Test Report: KVM_Linux_crio 17777

ae144fcddc3654c644548c9cf831271f2087ad79:2023-12-13:32259

Failed tests (26/299)

Order  Failed test  Duration (s)
35 TestAddons/parallel/Ingress 158.65
48 TestAddons/StoppedEnableDisable 154.89
164 TestIngressAddonLegacy/serial/ValidateIngressAddons 170.88
212 TestMultiNode/serial/PingHostFrom2Pods 3.29
219 TestMultiNode/serial/RestartKeepsNodes 688.39
221 TestMultiNode/serial/StopMultiNode 143.11
228 TestPreload 265.55
234 TestRunningBinaryUpgrade 209.74
269 TestStoppedBinaryUpgrade/Upgrade 263.59
270 TestPause/serial/SecondStartNoReconfiguration 72.76
282 TestStartStop/group/old-k8s-version/serial/Stop 140.34
285 TestStartStop/group/embed-certs/serial/Stop 139.86
289 TestStartStop/group/no-preload/serial/Stop 139.85
291 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.81
292 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.42
294 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
297 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
300 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.26
301 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.22
302 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.13
303 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.32
304 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 353.18
305 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 457.46
306 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 312.28
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 179.27
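
To iterate on any single failure from this table outside of CI, the subtest can be selected with the standard Go -run pattern. The sketch below is illustrative only: the package path and timeout reflect the usual minikube integration-test layout, and the driver/runtime flags this job uses (kvm2, crio) would normally be passed as well; the exact invocation may differ from what the CI harness runs.

    # Illustrative local re-run of one failed subtest (paths and flags are
    # assumptions, not the exact CI invocation):
    go test ./test/integration -v -timeout 90m \
        -run 'TestAddons/parallel/Ingress'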

TestAddons/parallel/Ingress (158.65s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-577685 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-577685 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-577685 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c4c14be0-be04-41cc-a432-9bd05871708b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c4c14be0-be04-41cc-a432-9bd05871708b] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.018862479s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-577685 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-577685 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.841655465s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context addons-577685 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-577685 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.136
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-577685 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-577685 addons disable ingress-dns --alsologtostderr -v=1: (1.447857557s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-577685 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-577685 addons disable ingress --alsologtostderr -v=1: (7.768697977s)
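
The failure recorded above is the ssh'd curl: the remote command exited with status 28, which matches curl's documented exit code for a timed-out operation, so no response arrived from the ingress controller on 127.0.0.1 within the roughly 2m10s the test waited. A rough manual re-check against the same profile might look like the sketch below; the explicit -m timeout, the -w output format, and the follow-up kubectl queries are illustrative additions, not part of the test itself.

    # Illustrative manual re-check of the failing request (profile name from this run):
    out/minikube-linux-amd64 -p addons-577685 ssh \
      "curl -s -m 60 -o /dev/null -w '%{http_code}\n' http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # If it still times out, confirm the controller and the Ingress object are present:
    kubectl --context addons-577685 -n ingress-nginx get pods
    kubectl --context addons-577685 get ingress -A
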
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-577685 -n addons-577685
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-577685 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-577685 logs -n 25: (1.327273101s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-647419 | jenkins | v1.32.0 | 12 Dec 23 22:55 UTC |                     |
	|         | -p download-only-647419                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 12 Dec 23 22:56 UTC | 12 Dec 23 22:56 UTC |
	| delete  | -p download-only-647419                                                                     | download-only-647419 | jenkins | v1.32.0 | 12 Dec 23 22:56 UTC | 12 Dec 23 22:56 UTC |
	| delete  | -p download-only-647419                                                                     | download-only-647419 | jenkins | v1.32.0 | 12 Dec 23 22:56 UTC | 12 Dec 23 22:56 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-191948 | jenkins | v1.32.0 | 12 Dec 23 22:56 UTC |                     |
	|         | binary-mirror-191948                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40327                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-191948                                                                     | binary-mirror-191948 | jenkins | v1.32.0 | 12 Dec 23 22:56 UTC | 12 Dec 23 22:56 UTC |
	| addons  | disable dashboard -p                                                                        | addons-577685        | jenkins | v1.32.0 | 12 Dec 23 22:56 UTC |                     |
	|         | addons-577685                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-577685        | jenkins | v1.32.0 | 12 Dec 23 22:56 UTC |                     |
	|         | addons-577685                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-577685 --wait=true                                                                | addons-577685        | jenkins | v1.32.0 | 12 Dec 23 22:56 UTC | 12 Dec 23 23:00 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-577685 addons                                                                        | addons-577685        | jenkins | v1.32.0 | 12 Dec 23 23:00 UTC | 12 Dec 23 23:00 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-577685        | jenkins | v1.32.0 | 12 Dec 23 23:00 UTC | 12 Dec 23 23:00 UTC |
	|         | addons-577685                                                                               |                      |         |         |                     |                     |
	| addons  | addons-577685 addons disable                                                                | addons-577685        | jenkins | v1.32.0 | 12 Dec 23 23:00 UTC | 12 Dec 23 23:00 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-577685 ip                                                                            | addons-577685        | jenkins | v1.32.0 | 12 Dec 23 23:00 UTC | 12 Dec 23 23:00 UTC |
	| addons  | addons-577685 addons disable                                                                | addons-577685        | jenkins | v1.32.0 | 12 Dec 23 23:00 UTC | 12 Dec 23 23:00 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-577685        | jenkins | v1.32.0 | 12 Dec 23 23:00 UTC | 12 Dec 23 23:00 UTC |
	|         | -p addons-577685                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-577685 ssh curl -s                                                                   | addons-577685        | jenkins | v1.32.0 | 12 Dec 23 23:00 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-577685        | jenkins | v1.32.0 | 12 Dec 23 23:00 UTC | 12 Dec 23 23:00 UTC |
	|         | addons-577685                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-577685        | jenkins | v1.32.0 | 12 Dec 23 23:00 UTC | 12 Dec 23 23:00 UTC |
	|         | -p addons-577685                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-577685 ssh cat                                                                       | addons-577685        | jenkins | v1.32.0 | 12 Dec 23 23:00 UTC | 12 Dec 23 23:00 UTC |
	|         | /opt/local-path-provisioner/pvc-4645bbf6-7858-4980-ba0f-98b14aad17a1_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-577685 addons disable                                                                | addons-577685        | jenkins | v1.32.0 | 12 Dec 23 23:00 UTC | 12 Dec 23 23:01 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-577685 addons                                                                        | addons-577685        | jenkins | v1.32.0 | 12 Dec 23 23:00 UTC | 12 Dec 23 23:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-577685 addons                                                                        | addons-577685        | jenkins | v1.32.0 | 12 Dec 23 23:01 UTC | 12 Dec 23 23:01 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-577685 ip                                                                            | addons-577685        | jenkins | v1.32.0 | 12 Dec 23 23:02 UTC | 12 Dec 23 23:02 UTC |
	| addons  | addons-577685 addons disable                                                                | addons-577685        | jenkins | v1.32.0 | 12 Dec 23 23:02 UTC | 12 Dec 23 23:02 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-577685 addons disable                                                                | addons-577685        | jenkins | v1.32.0 | 12 Dec 23 23:02 UTC | 12 Dec 23 23:03 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:56:33
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:56:33.098581  144147 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:56:33.098840  144147 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:56:33.098851  144147 out.go:309] Setting ErrFile to fd 2...
	I1212 22:56:33.098855  144147 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:56:33.099079  144147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1212 22:56:33.099723  144147 out.go:303] Setting JSON to false
	I1212 22:56:33.100643  144147 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5941,"bootTime":1702415852,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:56:33.100700  144147 start.go:138] virtualization: kvm guest
	I1212 22:56:33.102852  144147 out.go:177] * [addons-577685] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:56:33.104392  144147 out.go:177]   - MINIKUBE_LOCATION=17777
	I1212 22:56:33.105720  144147 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:56:33.104478  144147 notify.go:220] Checking for updates...
	I1212 22:56:33.108409  144147 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 22:56:33.110099  144147 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 22:56:33.111755  144147 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 22:56:33.113283  144147 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 22:56:33.114986  144147 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:56:33.145907  144147 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 22:56:33.147399  144147 start.go:298] selected driver: kvm2
	I1212 22:56:33.147426  144147 start.go:902] validating driver "kvm2" against <nil>
	I1212 22:56:33.147438  144147 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 22:56:33.148385  144147 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:56:33.148481  144147 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17777-136241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 22:56:33.162281  144147 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 22:56:33.162351  144147 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 22:56:33.162568  144147 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 22:56:33.162617  144147 cni.go:84] Creating CNI manager for ""
	I1212 22:56:33.162629  144147 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 22:56:33.162637  144147 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 22:56:33.162644  144147 start_flags.go:323] config:
	{Name:addons-577685 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-577685 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:56:33.162758  144147 iso.go:125] acquiring lock: {Name:mkaf0c5717f5bf6253bd7ebf86eb20a82d195bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:56:33.164726  144147 out.go:177] * Starting control plane node addons-577685 in cluster addons-577685
	I1212 22:56:33.166179  144147 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:56:33.166210  144147 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 22:56:33.166216  144147 cache.go:56] Caching tarball of preloaded images
	I1212 22:56:33.166278  144147 preload.go:174] Found /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 22:56:33.166288  144147 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 22:56:33.166589  144147 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/config.json ...
	I1212 22:56:33.166610  144147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/config.json: {Name:mk0ccd22f49db2a524eb9314daad5bbe2bba30ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:56:33.166738  144147 start.go:365] acquiring machines lock for addons-577685: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 22:56:33.166777  144147 start.go:369] acquired machines lock for "addons-577685" in 26.047µs
	I1212 22:56:33.166790  144147 start.go:93] Provisioning new machine with config: &{Name:addons-577685 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:addons-577685 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 22:56:33.166855  144147 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 22:56:33.168751  144147 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1212 22:56:33.168872  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:56:33.168909  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:56:33.182694  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38431
	I1212 22:56:33.183125  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:56:33.183632  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:56:33.183655  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:56:33.184013  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:56:33.184199  144147 main.go:141] libmachine: (addons-577685) Calling .GetMachineName
	I1212 22:56:33.184416  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:56:33.184663  144147 start.go:159] libmachine.API.Create for "addons-577685" (driver="kvm2")
	I1212 22:56:33.184698  144147 client.go:168] LocalClient.Create starting
	I1212 22:56:33.184734  144147 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem
	I1212 22:56:33.270109  144147 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem
	I1212 22:56:33.779894  144147 main.go:141] libmachine: Running pre-create checks...
	I1212 22:56:33.779921  144147 main.go:141] libmachine: (addons-577685) Calling .PreCreateCheck
	I1212 22:56:33.780529  144147 main.go:141] libmachine: (addons-577685) Calling .GetConfigRaw
	I1212 22:56:33.781016  144147 main.go:141] libmachine: Creating machine...
	I1212 22:56:33.781032  144147 main.go:141] libmachine: (addons-577685) Calling .Create
	I1212 22:56:33.781176  144147 main.go:141] libmachine: (addons-577685) Creating KVM machine...
	I1212 22:56:33.782469  144147 main.go:141] libmachine: (addons-577685) DBG | found existing default KVM network
	I1212 22:56:33.783166  144147 main.go:141] libmachine: (addons-577685) DBG | I1212 22:56:33.783020  144168 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000147900}
	I1212 22:56:33.788819  144147 main.go:141] libmachine: (addons-577685) DBG | trying to create private KVM network mk-addons-577685 192.168.39.0/24...
	I1212 22:56:33.857039  144147 main.go:141] libmachine: (addons-577685) DBG | private KVM network mk-addons-577685 192.168.39.0/24 created
	I1212 22:56:33.857072  144147 main.go:141] libmachine: (addons-577685) Setting up store path in /home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685 ...
	I1212 22:56:33.857089  144147 main.go:141] libmachine: (addons-577685) DBG | I1212 22:56:33.857023  144168 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 22:56:33.857155  144147 main.go:141] libmachine: (addons-577685) Building disk image from file:///home/jenkins/minikube-integration/17777-136241/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso
	I1212 22:56:33.857191  144147 main.go:141] libmachine: (addons-577685) Downloading /home/jenkins/minikube-integration/17777-136241/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17777-136241/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso...
	I1212 22:56:34.093977  144147 main.go:141] libmachine: (addons-577685) DBG | I1212 22:56:34.093843  144168 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa...
	I1212 22:56:34.174452  144147 main.go:141] libmachine: (addons-577685) DBG | I1212 22:56:34.174304  144168 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/addons-577685.rawdisk...
	I1212 22:56:34.174486  144147 main.go:141] libmachine: (addons-577685) DBG | Writing magic tar header
	I1212 22:56:34.174509  144147 main.go:141] libmachine: (addons-577685) DBG | Writing SSH key tar header
	I1212 22:56:34.174528  144147 main.go:141] libmachine: (addons-577685) DBG | I1212 22:56:34.174416  144168 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685 ...
	I1212 22:56:34.174543  144147 main.go:141] libmachine: (addons-577685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685
	I1212 22:56:34.174550  144147 main.go:141] libmachine: (addons-577685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17777-136241/.minikube/machines
	I1212 22:56:34.174564  144147 main.go:141] libmachine: (addons-577685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 22:56:34.174571  144147 main.go:141] libmachine: (addons-577685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17777-136241
	I1212 22:56:34.174580  144147 main.go:141] libmachine: (addons-577685) Setting executable bit set on /home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685 (perms=drwx------)
	I1212 22:56:34.174606  144147 main.go:141] libmachine: (addons-577685) Setting executable bit set on /home/jenkins/minikube-integration/17777-136241/.minikube/machines (perms=drwxr-xr-x)
	I1212 22:56:34.174629  144147 main.go:141] libmachine: (addons-577685) Setting executable bit set on /home/jenkins/minikube-integration/17777-136241/.minikube (perms=drwxr-xr-x)
	I1212 22:56:34.174636  144147 main.go:141] libmachine: (addons-577685) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 22:56:34.174644  144147 main.go:141] libmachine: (addons-577685) DBG | Checking permissions on dir: /home/jenkins
	I1212 22:56:34.174649  144147 main.go:141] libmachine: (addons-577685) DBG | Checking permissions on dir: /home
	I1212 22:56:34.174660  144147 main.go:141] libmachine: (addons-577685) DBG | Skipping /home - not owner
	I1212 22:56:34.174668  144147 main.go:141] libmachine: (addons-577685) Setting executable bit set on /home/jenkins/minikube-integration/17777-136241 (perms=drwxrwxr-x)
	I1212 22:56:34.174675  144147 main.go:141] libmachine: (addons-577685) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 22:56:34.174682  144147 main.go:141] libmachine: (addons-577685) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 22:56:34.174687  144147 main.go:141] libmachine: (addons-577685) Creating domain...
	I1212 22:56:34.175950  144147 main.go:141] libmachine: (addons-577685) define libvirt domain using xml: 
	I1212 22:56:34.175983  144147 main.go:141] libmachine: (addons-577685) <domain type='kvm'>
	I1212 22:56:34.175995  144147 main.go:141] libmachine: (addons-577685)   <name>addons-577685</name>
	I1212 22:56:34.176004  144147 main.go:141] libmachine: (addons-577685)   <memory unit='MiB'>4000</memory>
	I1212 22:56:34.176013  144147 main.go:141] libmachine: (addons-577685)   <vcpu>2</vcpu>
	I1212 22:56:34.176033  144147 main.go:141] libmachine: (addons-577685)   <features>
	I1212 22:56:34.176045  144147 main.go:141] libmachine: (addons-577685)     <acpi/>
	I1212 22:56:34.176058  144147 main.go:141] libmachine: (addons-577685)     <apic/>
	I1212 22:56:34.176066  144147 main.go:141] libmachine: (addons-577685)     <pae/>
	I1212 22:56:34.176072  144147 main.go:141] libmachine: (addons-577685)     
	I1212 22:56:34.176078  144147 main.go:141] libmachine: (addons-577685)   </features>
	I1212 22:56:34.176093  144147 main.go:141] libmachine: (addons-577685)   <cpu mode='host-passthrough'>
	I1212 22:56:34.176106  144147 main.go:141] libmachine: (addons-577685)   
	I1212 22:56:34.176118  144147 main.go:141] libmachine: (addons-577685)   </cpu>
	I1212 22:56:34.176132  144147 main.go:141] libmachine: (addons-577685)   <os>
	I1212 22:56:34.176145  144147 main.go:141] libmachine: (addons-577685)     <type>hvm</type>
	I1212 22:56:34.176158  144147 main.go:141] libmachine: (addons-577685)     <boot dev='cdrom'/>
	I1212 22:56:34.176173  144147 main.go:141] libmachine: (addons-577685)     <boot dev='hd'/>
	I1212 22:56:34.176186  144147 main.go:141] libmachine: (addons-577685)     <bootmenu enable='no'/>
	I1212 22:56:34.176198  144147 main.go:141] libmachine: (addons-577685)   </os>
	I1212 22:56:34.176212  144147 main.go:141] libmachine: (addons-577685)   <devices>
	I1212 22:56:34.176225  144147 main.go:141] libmachine: (addons-577685)     <disk type='file' device='cdrom'>
	I1212 22:56:34.176250  144147 main.go:141] libmachine: (addons-577685)       <source file='/home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/boot2docker.iso'/>
	I1212 22:56:34.176268  144147 main.go:141] libmachine: (addons-577685)       <target dev='hdc' bus='scsi'/>
	I1212 22:56:34.176281  144147 main.go:141] libmachine: (addons-577685)       <readonly/>
	I1212 22:56:34.176292  144147 main.go:141] libmachine: (addons-577685)     </disk>
	I1212 22:56:34.176306  144147 main.go:141] libmachine: (addons-577685)     <disk type='file' device='disk'>
	I1212 22:56:34.176320  144147 main.go:141] libmachine: (addons-577685)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 22:56:34.176338  144147 main.go:141] libmachine: (addons-577685)       <source file='/home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/addons-577685.rawdisk'/>
	I1212 22:56:34.176355  144147 main.go:141] libmachine: (addons-577685)       <target dev='hda' bus='virtio'/>
	I1212 22:56:34.176368  144147 main.go:141] libmachine: (addons-577685)     </disk>
	I1212 22:56:34.176380  144147 main.go:141] libmachine: (addons-577685)     <interface type='network'>
	I1212 22:56:34.176394  144147 main.go:141] libmachine: (addons-577685)       <source network='mk-addons-577685'/>
	I1212 22:56:34.176402  144147 main.go:141] libmachine: (addons-577685)       <model type='virtio'/>
	I1212 22:56:34.176414  144147 main.go:141] libmachine: (addons-577685)     </interface>
	I1212 22:56:34.176458  144147 main.go:141] libmachine: (addons-577685)     <interface type='network'>
	I1212 22:56:34.176479  144147 main.go:141] libmachine: (addons-577685)       <source network='default'/>
	I1212 22:56:34.176491  144147 main.go:141] libmachine: (addons-577685)       <model type='virtio'/>
	I1212 22:56:34.176503  144147 main.go:141] libmachine: (addons-577685)     </interface>
	I1212 22:56:34.176512  144147 main.go:141] libmachine: (addons-577685)     <serial type='pty'>
	I1212 22:56:34.176524  144147 main.go:141] libmachine: (addons-577685)       <target port='0'/>
	I1212 22:56:34.176536  144147 main.go:141] libmachine: (addons-577685)     </serial>
	I1212 22:56:34.176577  144147 main.go:141] libmachine: (addons-577685)     <console type='pty'>
	I1212 22:56:34.176614  144147 main.go:141] libmachine: (addons-577685)       <target type='serial' port='0'/>
	I1212 22:56:34.176628  144147 main.go:141] libmachine: (addons-577685)     </console>
	I1212 22:56:34.176644  144147 main.go:141] libmachine: (addons-577685)     <rng model='virtio'>
	I1212 22:56:34.176686  144147 main.go:141] libmachine: (addons-577685)       <backend model='random'>/dev/random</backend>
	I1212 22:56:34.176710  144147 main.go:141] libmachine: (addons-577685)     </rng>
	I1212 22:56:34.176724  144147 main.go:141] libmachine: (addons-577685)     
	I1212 22:56:34.176739  144147 main.go:141] libmachine: (addons-577685)     
	I1212 22:56:34.176753  144147 main.go:141] libmachine: (addons-577685)   </devices>
	I1212 22:56:34.176764  144147 main.go:141] libmachine: (addons-577685) </domain>
	I1212 22:56:34.176780  144147 main.go:141] libmachine: (addons-577685) 
	I1212 22:56:34.180703  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:0e:35:4a in network default
	I1212 22:56:34.181283  144147 main.go:141] libmachine: (addons-577685) Ensuring networks are active...
	I1212 22:56:34.181299  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:34.181905  144147 main.go:141] libmachine: (addons-577685) Ensuring network default is active
	I1212 22:56:34.182267  144147 main.go:141] libmachine: (addons-577685) Ensuring network mk-addons-577685 is active
	I1212 22:56:34.182663  144147 main.go:141] libmachine: (addons-577685) Getting domain xml...
	I1212 22:56:34.183517  144147 main.go:141] libmachine: (addons-577685) Creating domain...
	I1212 22:56:35.373344  144147 main.go:141] libmachine: (addons-577685) Waiting to get IP...
	I1212 22:56:35.374127  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:35.374481  144147 main.go:141] libmachine: (addons-577685) DBG | unable to find current IP address of domain addons-577685 in network mk-addons-577685
	I1212 22:56:35.374556  144147 main.go:141] libmachine: (addons-577685) DBG | I1212 22:56:35.374482  144168 retry.go:31] will retry after 228.959726ms: waiting for machine to come up
	I1212 22:56:35.605029  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:35.605506  144147 main.go:141] libmachine: (addons-577685) DBG | unable to find current IP address of domain addons-577685 in network mk-addons-577685
	I1212 22:56:35.605534  144147 main.go:141] libmachine: (addons-577685) DBG | I1212 22:56:35.605457  144168 retry.go:31] will retry after 301.55449ms: waiting for machine to come up
	I1212 22:56:35.909014  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:35.909423  144147 main.go:141] libmachine: (addons-577685) DBG | unable to find current IP address of domain addons-577685 in network mk-addons-577685
	I1212 22:56:35.909455  144147 main.go:141] libmachine: (addons-577685) DBG | I1212 22:56:35.909362  144168 retry.go:31] will retry after 310.872994ms: waiting for machine to come up
	I1212 22:56:36.221945  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:36.222477  144147 main.go:141] libmachine: (addons-577685) DBG | unable to find current IP address of domain addons-577685 in network mk-addons-577685
	I1212 22:56:36.222505  144147 main.go:141] libmachine: (addons-577685) DBG | I1212 22:56:36.222422  144168 retry.go:31] will retry after 562.204752ms: waiting for machine to come up
	I1212 22:56:36.786063  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:36.786543  144147 main.go:141] libmachine: (addons-577685) DBG | unable to find current IP address of domain addons-577685 in network mk-addons-577685
	I1212 22:56:36.786573  144147 main.go:141] libmachine: (addons-577685) DBG | I1212 22:56:36.786490  144168 retry.go:31] will retry after 669.218983ms: waiting for machine to come up
	I1212 22:56:37.457062  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:37.457507  144147 main.go:141] libmachine: (addons-577685) DBG | unable to find current IP address of domain addons-577685 in network mk-addons-577685
	I1212 22:56:37.457539  144147 main.go:141] libmachine: (addons-577685) DBG | I1212 22:56:37.457461  144168 retry.go:31] will retry after 727.68366ms: waiting for machine to come up
	I1212 22:56:38.186721  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:38.187187  144147 main.go:141] libmachine: (addons-577685) DBG | unable to find current IP address of domain addons-577685 in network mk-addons-577685
	I1212 22:56:38.187213  144147 main.go:141] libmachine: (addons-577685) DBG | I1212 22:56:38.187151  144168 retry.go:31] will retry after 761.806577ms: waiting for machine to come up
	I1212 22:56:38.950181  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:38.950733  144147 main.go:141] libmachine: (addons-577685) DBG | unable to find current IP address of domain addons-577685 in network mk-addons-577685
	I1212 22:56:38.950766  144147 main.go:141] libmachine: (addons-577685) DBG | I1212 22:56:38.950645  144168 retry.go:31] will retry after 1.058137855s: waiting for machine to come up
	I1212 22:56:40.010922  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:40.011463  144147 main.go:141] libmachine: (addons-577685) DBG | unable to find current IP address of domain addons-577685 in network mk-addons-577685
	I1212 22:56:40.011493  144147 main.go:141] libmachine: (addons-577685) DBG | I1212 22:56:40.011414  144168 retry.go:31] will retry after 1.15939038s: waiting for machine to come up
	I1212 22:56:41.172595  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:41.172982  144147 main.go:141] libmachine: (addons-577685) DBG | unable to find current IP address of domain addons-577685 in network mk-addons-577685
	I1212 22:56:41.173011  144147 main.go:141] libmachine: (addons-577685) DBG | I1212 22:56:41.172937  144168 retry.go:31] will retry after 1.751417278s: waiting for machine to come up
	I1212 22:56:42.926931  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:42.927343  144147 main.go:141] libmachine: (addons-577685) DBG | unable to find current IP address of domain addons-577685 in network mk-addons-577685
	I1212 22:56:42.927372  144147 main.go:141] libmachine: (addons-577685) DBG | I1212 22:56:42.927295  144168 retry.go:31] will retry after 2.38550359s: waiting for machine to come up
	I1212 22:56:45.315691  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:45.316154  144147 main.go:141] libmachine: (addons-577685) DBG | unable to find current IP address of domain addons-577685 in network mk-addons-577685
	I1212 22:56:45.316184  144147 main.go:141] libmachine: (addons-577685) DBG | I1212 22:56:45.316065  144168 retry.go:31] will retry after 2.673706812s: waiting for machine to come up
	I1212 22:56:47.992906  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:47.993294  144147 main.go:141] libmachine: (addons-577685) DBG | unable to find current IP address of domain addons-577685 in network mk-addons-577685
	I1212 22:56:47.993326  144147 main.go:141] libmachine: (addons-577685) DBG | I1212 22:56:47.993249  144168 retry.go:31] will retry after 4.075463799s: waiting for machine to come up
	I1212 22:56:52.070614  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:52.071047  144147 main.go:141] libmachine: (addons-577685) DBG | unable to find current IP address of domain addons-577685 in network mk-addons-577685
	I1212 22:56:52.071084  144147 main.go:141] libmachine: (addons-577685) DBG | I1212 22:56:52.070988  144168 retry.go:31] will retry after 4.729468537s: waiting for machine to come up
	I1212 22:56:56.804893  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:56.805420  144147 main.go:141] libmachine: (addons-577685) Found IP for machine: 192.168.39.136
	I1212 22:56:56.805448  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has current primary IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:56.805464  144147 main.go:141] libmachine: (addons-577685) Reserving static IP address...
	I1212 22:56:56.805896  144147 main.go:141] libmachine: (addons-577685) DBG | unable to find host DHCP lease matching {name: "addons-577685", mac: "52:54:00:83:15:31", ip: "192.168.39.136"} in network mk-addons-577685
	I1212 22:56:56.876084  144147 main.go:141] libmachine: (addons-577685) DBG | Getting to WaitForSSH function...
	I1212 22:56:56.876116  144147 main.go:141] libmachine: (addons-577685) Reserved static IP address: 192.168.39.136
	I1212 22:56:56.876130  144147 main.go:141] libmachine: (addons-577685) Waiting for SSH to be available...
	I1212 22:56:56.878876  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:56.879298  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:minikube Clientid:01:52:54:00:83:15:31}
	I1212 22:56:56.879323  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:56.879490  144147 main.go:141] libmachine: (addons-577685) DBG | Using SSH client type: external
	I1212 22:56:56.879517  144147 main.go:141] libmachine: (addons-577685) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa (-rw-------)
	I1212 22:56:56.879543  144147 main.go:141] libmachine: (addons-577685) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 22:56:56.879553  144147 main.go:141] libmachine: (addons-577685) DBG | About to run SSH command:
	I1212 22:56:56.879566  144147 main.go:141] libmachine: (addons-577685) DBG | exit 0
	I1212 22:56:56.972677  144147 main.go:141] libmachine: (addons-577685) DBG | SSH cmd err, output: <nil>: 
	I1212 22:56:56.972942  144147 main.go:141] libmachine: (addons-577685) KVM machine creation complete!
	I1212 22:56:56.973368  144147 main.go:141] libmachine: (addons-577685) Calling .GetConfigRaw
	I1212 22:56:56.973924  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:56:56.974100  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:56:56.974269  144147 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 22:56:56.974281  144147 main.go:141] libmachine: (addons-577685) Calling .GetState
	I1212 22:56:56.975390  144147 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 22:56:56.975403  144147 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 22:56:56.975409  144147 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 22:56:56.975415  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:56:56.977570  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:56.977995  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:56:56.978025  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:56.978183  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:56:56.978345  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:56:56.978549  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:56:56.978714  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:56:56.978888  144147 main.go:141] libmachine: Using SSH client type: native
	I1212 22:56:56.979242  144147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I1212 22:56:56.979254  144147 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 22:56:57.095753  144147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 22:56:57.095783  144147 main.go:141] libmachine: Detecting the provisioner...
	I1212 22:56:57.095809  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:56:57.099845  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:57.100231  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:56:57.100253  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:57.100424  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:56:57.100653  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:56:57.100843  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:56:57.100992  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:56:57.101159  144147 main.go:141] libmachine: Using SSH client type: native
	I1212 22:56:57.101523  144147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I1212 22:56:57.101550  144147 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 22:56:57.220873  144147 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0ec83c8-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 22:56:57.220960  144147 main.go:141] libmachine: found compatible host: buildroot
	I1212 22:56:57.220986  144147 main.go:141] libmachine: Provisioning with buildroot...
	I1212 22:56:57.220997  144147 main.go:141] libmachine: (addons-577685) Calling .GetMachineName
	I1212 22:56:57.221232  144147 buildroot.go:166] provisioning hostname "addons-577685"
	I1212 22:56:57.221262  144147 main.go:141] libmachine: (addons-577685) Calling .GetMachineName
	I1212 22:56:57.221447  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:56:57.224138  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:57.224536  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:56:57.224571  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:57.224670  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:56:57.224848  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:56:57.225041  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:56:57.225199  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:56:57.225360  144147 main.go:141] libmachine: Using SSH client type: native
	I1212 22:56:57.225691  144147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I1212 22:56:57.225704  144147 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-577685 && echo "addons-577685" | sudo tee /etc/hostname
	I1212 22:56:57.359556  144147 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-577685
	
	I1212 22:56:57.359590  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:56:57.362309  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:57.362703  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:56:57.362733  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:57.362918  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:56:57.363276  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:56:57.363443  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:56:57.363587  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:56:57.363759  144147 main.go:141] libmachine: Using SSH client type: native
	I1212 22:56:57.364079  144147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I1212 22:56:57.364096  144147 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-577685' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-577685/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-577685' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 22:56:57.487561  144147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
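Note: the hostname command above, like every other "Run:" line in this log, is executed over SSH against the freshly created VM. Purely as an illustration (not minikube's actual implementation), a single remote command can be run with golang.org/x/crypto/ssh roughly as follows; the address, user and key path are placeholders mirroring values seen in the log.

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runRemote runs one command over SSH and returns its combined output,
	// roughly what each "Run:" line above amounts to.
	func runRemote(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // counterpart of StrictHostKeyChecking=no above
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		// Host and user mirror the log; the key path comes from the environment here.
		out, err := runRemote("192.168.39.136:22", "docker", os.Getenv("SSH_KEY"), "hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(out)
	}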
	I1212 22:56:57.487595  144147 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1212 22:56:57.487616  144147 buildroot.go:174] setting up certificates
	I1212 22:56:57.487628  144147 provision.go:83] configureAuth start
	I1212 22:56:57.487637  144147 main.go:141] libmachine: (addons-577685) Calling .GetMachineName
	I1212 22:56:57.487920  144147 main.go:141] libmachine: (addons-577685) Calling .GetIP
	I1212 22:56:57.490368  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:57.490683  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:56:57.490712  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:57.490919  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:56:57.493289  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:57.493586  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:56:57.493611  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:57.493727  144147 provision.go:138] copyHostCerts
	I1212 22:56:57.493808  144147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1212 22:56:57.493971  144147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1212 22:56:57.494053  144147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1212 22:56:57.494148  144147 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.addons-577685 san=[192.168.39.136 192.168.39.136 localhost 127.0.0.1 minikube addons-577685]
	I1212 22:56:57.895402  144147 provision.go:172] copyRemoteCerts
	I1212 22:56:57.895487  144147 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 22:56:57.895520  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:56:57.898127  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:57.898435  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:56:57.898463  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:57.898605  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:56:57.898790  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:56:57.898956  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:56:57.899147  144147 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa Username:docker}
	I1212 22:56:57.992274  144147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1212 22:56:58.015551  144147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 22:56:58.038277  144147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 22:56:58.060232  144147 provision.go:86] duration metric: configureAuth took 572.587756ms
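Note: configureAuth above generated a server certificate whose SANs cover the VM IP, localhost and the machine name, then copied it to /etc/docker. A minimal, self-contained sketch of the same idea using only the Go standard library; it self-signs for brevity (minikube signs with its own CA instead), and the validity period is an assumption for the example.

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-577685"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour), // validity assumed for illustration
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the provisioning log above: VM IP, localhost, machine names.
			DNSNames:    []string{"localhost", "minikube", "addons-577685"},
			IPAddresses: []net.IP{net.ParseIP("192.168.39.136"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}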
	I1212 22:56:58.060262  144147 buildroot.go:189] setting minikube options for container-runtime
	I1212 22:56:58.060508  144147 config.go:182] Loaded profile config "addons-577685": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:56:58.060598  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:56:58.063190  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:58.063491  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:56:58.063517  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:58.063750  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:56:58.063967  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:56:58.064111  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:56:58.064282  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:56:58.064412  144147 main.go:141] libmachine: Using SSH client type: native
	I1212 22:56:58.064757  144147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I1212 22:56:58.064783  144147 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 22:56:58.375046  144147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 22:56:58.375079  144147 main.go:141] libmachine: Checking connection to Docker...
	I1212 22:56:58.375096  144147 main.go:141] libmachine: (addons-577685) Calling .GetURL
	I1212 22:56:58.376370  144147 main.go:141] libmachine: (addons-577685) DBG | Using libvirt version 6000000
	I1212 22:56:58.378243  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:58.378516  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:56:58.378544  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:58.378656  144147 main.go:141] libmachine: Docker is up and running!
	I1212 22:56:58.378683  144147 main.go:141] libmachine: Reticulating splines...
	I1212 22:56:58.378690  144147 client.go:171] LocalClient.Create took 25.193983904s
	I1212 22:56:58.378708  144147 start.go:167] duration metric: libmachine.API.Create for "addons-577685" took 25.194048049s
	I1212 22:56:58.378716  144147 start.go:300] post-start starting for "addons-577685" (driver="kvm2")
	I1212 22:56:58.378728  144147 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 22:56:58.378743  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:56:58.379004  144147 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 22:56:58.379028  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:56:58.381131  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:58.381395  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:56:58.381423  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:58.381569  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:56:58.381723  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:56:58.381853  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:56:58.382010  144147 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa Username:docker}
	I1212 22:56:58.469627  144147 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 22:56:58.473924  144147 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 22:56:58.473952  144147 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1212 22:56:58.474044  144147 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1212 22:56:58.474080  144147 start.go:303] post-start completed in 95.35508ms
	I1212 22:56:58.474123  144147 main.go:141] libmachine: (addons-577685) Calling .GetConfigRaw
	I1212 22:56:58.474742  144147 main.go:141] libmachine: (addons-577685) Calling .GetIP
	I1212 22:56:58.477687  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:58.478043  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:56:58.478083  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:58.478233  144147 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/config.json ...
	I1212 22:56:58.478408  144147 start.go:128] duration metric: createHost completed in 25.31154211s
	I1212 22:56:58.478455  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:56:58.481349  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:58.481698  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:56:58.481728  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:58.481871  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:56:58.482049  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:56:58.482221  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:56:58.482353  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:56:58.482510  144147 main.go:141] libmachine: Using SSH client type: native
	I1212 22:56:58.482831  144147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I1212 22:56:58.482847  144147 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 22:56:58.601111  144147 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702421818.572058703
	
	I1212 22:56:58.601144  144147 fix.go:206] guest clock: 1702421818.572058703
	I1212 22:56:58.601155  144147 fix.go:219] Guest: 2023-12-12 22:56:58.572058703 +0000 UTC Remote: 2023-12-12 22:56:58.478420474 +0000 UTC m=+25.426127411 (delta=93.638229ms)
	I1212 22:56:58.601205  144147 fix.go:190] guest clock delta is within tolerance: 93.638229ms
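Note: the guest/host clock check above runs "date +%s.%N" on the VM and compares the result with the host clock. A minimal sketch of that comparison; the 2s tolerance here is an assumption for illustration, not minikube's actual threshold.

	package main

	import (
		"fmt"
		"log"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns "1702421818.572058703" (seconds.nanoseconds) into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1702421818.572058703") // value taken from the log above
		if err != nil {
			log.Fatal(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		// 2s tolerance is an assumption for this sketch.
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < 2*time.Second)
	}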
	I1212 22:56:58.601216  144147 start.go:83] releasing machines lock for "addons-577685", held for 25.434432069s
	I1212 22:56:58.601248  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:56:58.601554  144147 main.go:141] libmachine: (addons-577685) Calling .GetIP
	I1212 22:56:58.604511  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:58.604877  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:56:58.604908  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:58.605086  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:56:58.605584  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:56:58.605739  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:56:58.605849  144147 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 22:56:58.605906  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:56:58.605935  144147 ssh_runner.go:195] Run: cat /version.json
	I1212 22:56:58.605959  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:56:58.608207  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:58.608307  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:58.608587  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:56:58.608613  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:58.608644  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:56:58.608664  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:58.608759  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:56:58.608971  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:56:58.609004  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:56:58.609174  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:56:58.609244  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:56:58.609348  144147 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa Username:docker}
	I1212 22:56:58.609420  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:56:58.609547  144147 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa Username:docker}
	I1212 22:56:58.714627  144147 ssh_runner.go:195] Run: systemctl --version
	I1212 22:56:58.720135  144147 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 22:56:58.881236  144147 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 22:56:58.887576  144147 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 22:56:58.887649  144147 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:56:58.902336  144147 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 22:56:58.902360  144147 start.go:475] detecting cgroup driver to use...
	I1212 22:56:58.902423  144147 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 22:56:58.919003  144147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 22:56:58.930833  144147 docker.go:203] disabling cri-docker service (if available) ...
	I1212 22:56:58.930885  144147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 22:56:58.942588  144147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 22:56:58.954552  144147 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 22:56:59.054446  144147 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 22:56:59.168086  144147 docker.go:219] disabling docker service ...
	I1212 22:56:59.168178  144147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 22:56:59.181072  144147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 22:56:59.192610  144147 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 22:56:59.298298  144147 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 22:56:59.400005  144147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 22:56:59.411734  144147 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 22:56:59.428445  144147 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 22:56:59.428506  144147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:56:59.437113  144147 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 22:56:59.437165  144147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:56:59.445553  144147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:56:59.454137  144147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:56:59.462541  144147 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 22:56:59.471466  144147 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 22:56:59.478935  144147 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 22:56:59.479001  144147 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 22:56:59.491318  144147 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
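Note: because /proc/sys/net/bridge/bridge-nf-call-iptables was missing, the log falls back to loading the br_netfilter module and then enables IPv4 forwarding. A standalone sketch of that fallback (standard procfs paths, must run as root); illustrative only, not minikube's code.

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(sysctlPath); err != nil {
			// Sysctl not present yet: load the module that provides it.
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				log.Fatalf("modprobe br_netfilter: %v\n%s", err, out)
			}
		}
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
			log.Fatal(err)
		}
	}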
	I1212 22:56:59.499781  144147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 22:56:59.612387  144147 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 22:56:59.790115  144147 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 22:56:59.790209  144147 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 22:56:59.799277  144147 start.go:543] Will wait 60s for crictl version
	I1212 22:56:59.799366  144147 ssh_runner.go:195] Run: which crictl
	I1212 22:56:59.803374  144147 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 22:56:59.844699  144147 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 22:56:59.844833  144147 ssh_runner.go:195] Run: crio --version
	I1212 22:56:59.888821  144147 ssh_runner.go:195] Run: crio --version
	I1212 22:56:59.945315  144147 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 22:56:59.946714  144147 main.go:141] libmachine: (addons-577685) Calling .GetIP
	I1212 22:56:59.949550  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:59.949957  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:56:59.949985  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:56:59.950157  144147 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 22:56:59.954607  144147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 22:56:59.968129  144147 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:56:59.968185  144147 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 22:57:00.003503  144147 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 22:57:00.003571  144147 ssh_runner.go:195] Run: which lz4
	I1212 22:57:00.007398  144147 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 22:57:00.011427  144147 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 22:57:00.011475  144147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 22:57:01.762234  144147 crio.go:444] Took 1.754852 seconds to copy over tarball
	I1212 22:57:01.762302  144147 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 22:57:04.778236  144147 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.015900801s)
	I1212 22:57:04.778263  144147 crio.go:451] Took 3.016001 seconds to extract the tarball
	I1212 22:57:04.778272  144147 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 22:57:04.819411  144147 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 22:57:04.891630  144147 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 22:57:04.891655  144147 cache_images.go:84] Images are preloaded, skipping loading
	I1212 22:57:04.891713  144147 ssh_runner.go:195] Run: crio config
	I1212 22:57:04.955445  144147 cni.go:84] Creating CNI manager for ""
	I1212 22:57:04.955471  144147 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 22:57:04.955494  144147 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 22:57:04.955519  144147 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.136 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-577685 NodeName:addons-577685 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 22:57:04.955689  144147 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-577685"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
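Note: the YAML above is the rendered kubeadm config that is later copied to /var/tmp/minikube/kubeadm.yaml. One way to sanity-check such a file by hand is a kubeadm dry run, sketched below; this assumes kubeadm is on PATH and uses the path from the log, and it is not part of the test flow itself.

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// Dry-run the generated config without modifying the host.
		cmd := exec.Command("kubeadm", "init",
			"--config", "/var/tmp/minikube/kubeadm.yaml", // path taken from the log above
			"--dry-run")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}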
	
	I1212 22:57:04.955786  144147 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-577685 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-577685 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 22:57:04.955854  144147 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 22:57:04.964800  144147 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 22:57:04.964913  144147 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 22:57:04.973361  144147 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1212 22:57:04.990490  144147 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 22:57:05.007684  144147 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1212 22:57:05.024260  144147 ssh_runner.go:195] Run: grep 192.168.39.136	control-plane.minikube.internal$ /etc/hosts
	I1212 22:57:05.027982  144147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 22:57:05.041071  144147 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685 for IP: 192.168.39.136
	I1212 22:57:05.041111  144147 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:57:05.041274  144147 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1212 22:57:05.145231  144147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt ...
	I1212 22:57:05.145263  144147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt: {Name:mk4e943d26dd36b6d1bccd155b7fd5a1e1ea97fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:57:05.145436  144147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key ...
	I1212 22:57:05.145448  144147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key: {Name:mk6b808a446af4d06ea6d0a77b196d5a9e3477c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:57:05.145521  144147 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1212 22:57:05.317906  144147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt ...
	I1212 22:57:05.317938  144147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt: {Name:mk4b03a4e784050012f491d255396348e9b8d004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:57:05.318086  144147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key ...
	I1212 22:57:05.318097  144147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key: {Name:mk9df112480b61516a223cdb783f75ca1475d9fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:57:05.318195  144147 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.key
	I1212 22:57:05.318210  144147 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt with IP's: []
	I1212 22:57:05.426373  144147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt ...
	I1212 22:57:05.426405  144147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: {Name:mkd1f5e0431dc1e55d74919f74ef680b0fd0fd31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:57:05.426587  144147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.key ...
	I1212 22:57:05.426603  144147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.key: {Name:mk83a8aade8b0bb7ee44304c0446b36a87b8450c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:57:05.426701  144147 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/apiserver.key.015ac7b4
	I1212 22:57:05.426724  144147 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/apiserver.crt.015ac7b4 with IP's: [192.168.39.136 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 22:57:05.595086  144147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/apiserver.crt.015ac7b4 ...
	I1212 22:57:05.595120  144147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/apiserver.crt.015ac7b4: {Name:mk3494abc94bbda241553b53af0898d679e12b1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:57:05.595291  144147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/apiserver.key.015ac7b4 ...
	I1212 22:57:05.595307  144147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/apiserver.key.015ac7b4: {Name:mk7b251390b8cd855dd0caf686f513db6d7392cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:57:05.595373  144147 certs.go:337] copying /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/apiserver.crt.015ac7b4 -> /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/apiserver.crt
	I1212 22:57:05.595437  144147 certs.go:341] copying /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/apiserver.key.015ac7b4 -> /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/apiserver.key
	I1212 22:57:05.595479  144147 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/proxy-client.key
	I1212 22:57:05.595495  144147 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/proxy-client.crt with IP's: []
	I1212 22:57:05.706661  144147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/proxy-client.crt ...
	I1212 22:57:05.706692  144147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/proxy-client.crt: {Name:mkc5a91ae8b2c1f45bfef328a1787c228f36bc78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:57:05.706846  144147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/proxy-client.key ...
	I1212 22:57:05.706860  144147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/proxy-client.key: {Name:mk24a43daf2add5159a29472eb0835565c551db3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:57:05.707025  144147 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 22:57:05.707060  144147 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1212 22:57:05.707084  144147 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1212 22:57:05.707107  144147 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1212 22:57:05.707667  144147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 22:57:05.733439  144147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 22:57:05.758896  144147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 22:57:05.783694  144147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 22:57:05.806930  144147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 22:57:05.831040  144147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 22:57:05.855870  144147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 22:57:05.880383  144147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 22:57:05.904207  144147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 22:57:05.927806  144147 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 22:57:05.945119  144147 ssh_runner.go:195] Run: openssl version
	I1212 22:57:05.951210  144147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 22:57:05.961301  144147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:57:05.966221  144147 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:57:05.966292  144147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:57:05.972228  144147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
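Note: the two commands above install minikubeCA.pem into the system trust store: "openssl x509 -hash -noout" prints the subject-name hash, and the certificate is linked as <hash>.0 under /etc/ssl/certs so OpenSSL can locate it. A small sketch of the same idea (needs root; paths mirror the log).

	package main

	import (
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		const certPath = "/usr/share/ca-certificates/minikubeCA.pem"
		// openssl prints the subject hash, e.g. "b5213941" as seen in the log.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		_ = os.Remove(link) // mirror "ln -fs" semantics: replace an existing link
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			log.Fatal(err)
		}
	}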
	I1212 22:57:05.982113  144147 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 22:57:05.986486  144147 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 22:57:05.986552  144147 kubeadm.go:404] StartCluster: {Name:addons-577685 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:addons-577685 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:57:05.986635  144147 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 22:57:05.986706  144147 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 22:57:06.024581  144147 cri.go:89] found id: ""
	I1212 22:57:06.024654  144147 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 22:57:06.033608  144147 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 22:57:06.045743  144147 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 22:57:06.053985  144147 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 22:57:06.054036  144147 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 22:57:06.242375  144147 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 22:57:18.499376  144147 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 22:57:18.499449  144147 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 22:57:18.499518  144147 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 22:57:18.499673  144147 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 22:57:18.499837  144147 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 22:57:18.499929  144147 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 22:57:18.501227  144147 out.go:204]   - Generating certificates and keys ...
	I1212 22:57:18.501313  144147 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 22:57:18.501400  144147 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 22:57:18.501485  144147 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 22:57:18.501575  144147 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 22:57:18.501673  144147 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 22:57:18.501738  144147 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 22:57:18.501804  144147 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 22:57:18.501942  144147 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-577685 localhost] and IPs [192.168.39.136 127.0.0.1 ::1]
	I1212 22:57:18.502013  144147 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 22:57:18.502149  144147 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-577685 localhost] and IPs [192.168.39.136 127.0.0.1 ::1]
	I1212 22:57:18.502261  144147 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 22:57:18.502364  144147 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 22:57:18.502427  144147 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 22:57:18.502494  144147 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 22:57:18.502568  144147 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 22:57:18.502634  144147 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 22:57:18.502711  144147 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 22:57:18.502790  144147 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 22:57:18.502890  144147 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 22:57:18.502973  144147 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 22:57:18.504540  144147 out.go:204]   - Booting up control plane ...
	I1212 22:57:18.504648  144147 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 22:57:18.504747  144147 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 22:57:18.504834  144147 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 22:57:18.504961  144147 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 22:57:18.505090  144147 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 22:57:18.505133  144147 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 22:57:18.505254  144147 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 22:57:18.505333  144147 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002432 seconds
	I1212 22:57:18.505434  144147 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 22:57:18.505538  144147 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 22:57:18.505592  144147 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 22:57:18.505739  144147 kubeadm.go:322] [mark-control-plane] Marking the node addons-577685 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 22:57:18.505792  144147 kubeadm.go:322] [bootstrap-token] Using token: 88t7s4.63cui5zc8e14zyw6
	I1212 22:57:18.507078  144147 out.go:204]   - Configuring RBAC rules ...
	I1212 22:57:18.507185  144147 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 22:57:18.507297  144147 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 22:57:18.507463  144147 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 22:57:18.507569  144147 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 22:57:18.507698  144147 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 22:57:18.507819  144147 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 22:57:18.507948  144147 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 22:57:18.508015  144147 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 22:57:18.508098  144147 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 22:57:18.508110  144147 kubeadm.go:322] 
	I1212 22:57:18.508190  144147 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 22:57:18.508200  144147 kubeadm.go:322] 
	I1212 22:57:18.508298  144147 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 22:57:18.508310  144147 kubeadm.go:322] 
	I1212 22:57:18.508355  144147 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 22:57:18.508455  144147 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 22:57:18.508536  144147 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 22:57:18.508546  144147 kubeadm.go:322] 
	I1212 22:57:18.508607  144147 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 22:57:18.508614  144147 kubeadm.go:322] 
	I1212 22:57:18.508673  144147 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 22:57:18.508677  144147 kubeadm.go:322] 
	I1212 22:57:18.508717  144147 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 22:57:18.508819  144147 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 22:57:18.508888  144147 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 22:57:18.508895  144147 kubeadm.go:322] 
	I1212 22:57:18.508967  144147 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 22:57:18.509033  144147 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 22:57:18.509038  144147 kubeadm.go:322] 
	I1212 22:57:18.509108  144147 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 88t7s4.63cui5zc8e14zyw6 \
	I1212 22:57:18.509193  144147 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d \
	I1212 22:57:18.509235  144147 kubeadm.go:322] 	--control-plane 
	I1212 22:57:18.509251  144147 kubeadm.go:322] 
	I1212 22:57:18.509355  144147 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 22:57:18.509366  144147 kubeadm.go:322] 
	I1212 22:57:18.509450  144147 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 88t7s4.63cui5zc8e14zyw6 \
	I1212 22:57:18.509555  144147 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
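(Editor's note, illustrative only.) The sha256:... value kubeadm prints above is the pin of the cluster CA: a SHA-256 over the DER-encoded Subject Public Key Info of /etc/kubernetes/pki/ca.crt. A minimal Go sketch that recomputes that hash is shown below; it is not part of minikube or this test run, just a way to verify the value independently.

	// spkihash.go - illustrative; recomputes the kubeadm discovery-token-ca-cert-hash
	// (sha256 over the CA certificate's Subject Public Key Info).
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		raw, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw) // first PEM block is the CA certificate
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
	}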
	I1212 22:57:18.509568  144147 cni.go:84] Creating CNI manager for ""
	I1212 22:57:18.509574  144147 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 22:57:18.512678  144147 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 22:57:18.513996  144147 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 22:57:18.534414  144147 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
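(Editor's note, illustrative only.) The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration the previous line recommends. The sketch below writes a representative bridge conflist of that shape; the field names and subnet are assumptions and the exact bytes minikube writes may differ.

	// writecni.go - sketch of a bridge CNI conflist (host-local IPAM plus portmap);
	// not the literal file generated by minikube.
	package main

	import "os"

	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}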
	I1212 22:57:18.569031  144147 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 22:57:18.569107  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:18.569143  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=addons-577685 minikube.k8s.io/updated_at=2023_12_12T22_57_18_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:18.721288  144147 ops.go:34] apiserver oom_adj: -16
	I1212 22:57:18.727867  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:18.829417  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:19.432675  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:19.932553  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:20.433021  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:20.932177  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:21.433141  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:21.932136  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:22.432223  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:22.932508  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:23.432759  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:23.932646  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:24.433194  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:24.932717  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:25.432461  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:25.932837  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:26.432525  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:26.932908  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:27.432570  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:27.933086  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:28.432886  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:28.932749  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:29.432065  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:29.932920  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:30.432176  144147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:57:30.544796  144147 kubeadm.go:1088] duration metric: took 11.975748409s to wait for elevateKubeSystemPrivileges.
	I1212 22:57:30.544830  144147 kubeadm.go:406] StartCluster complete in 24.55828369s
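(Editor's note, illustrative only.) The repeated "kubectl get sa default" runs above are a readiness poll: elevateKubeSystemPrivileges waits until kube-controller-manager has created the "default" service account before the cluster-admin binding from 22:57:18.569107 can take effect, which here took ~12s. A minimal sketch of that poll pattern, using the same paths seen in the log, follows; it is a simplification, not minikube's actual implementation.

	// waitsa.go - poll until the "default" service account exists.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.28.4/kubectl"
		kubeconfig := "/var/lib/minikube/kubeconfig"

		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Succeeds only once the service account controller has created "default".
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account is present")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing in the log
		}
		fmt.Println("timed out waiting for default service account")
	}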
	I1212 22:57:30.544852  144147 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:57:30.544980  144147 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 22:57:30.545342  144147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:57:30.545545  144147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 22:57:30.545681  144147 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1212 22:57:30.545772  144147 config.go:182] Loaded profile config "addons-577685": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:57:30.545784  144147 addons.go:69] Setting inspektor-gadget=true in profile "addons-577685"
	I1212 22:57:30.545801  144147 addons.go:231] Setting addon inspektor-gadget=true in "addons-577685"
	I1212 22:57:30.545775  144147 addons.go:69] Setting volumesnapshots=true in profile "addons-577685"
	I1212 22:57:30.545810  144147 addons.go:69] Setting gcp-auth=true in profile "addons-577685"
	I1212 22:57:30.545833  144147 addons.go:69] Setting cloud-spanner=true in profile "addons-577685"
	I1212 22:57:30.545834  144147 addons.go:69] Setting metrics-server=true in profile "addons-577685"
	I1212 22:57:30.545842  144147 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-577685"
	I1212 22:57:30.545843  144147 addons.go:69] Setting helm-tiller=true in profile "addons-577685"
	I1212 22:57:30.545859  144147 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-577685"
	I1212 22:57:30.545859  144147 addons.go:231] Setting addon metrics-server=true in "addons-577685"
	I1212 22:57:30.545862  144147 addons.go:231] Setting addon cloud-spanner=true in "addons-577685"
	I1212 22:57:30.545870  144147 mustload.go:65] Loading cluster: addons-577685
	I1212 22:57:30.545876  144147 host.go:66] Checking if "addons-577685" exists ...
	I1212 22:57:30.546322  144147 addons.go:231] Setting addon helm-tiller=true in "addons-577685"
	I1212 22:57:30.546385  144147 host.go:66] Checking if "addons-577685" exists ...
	I1212 22:57:30.546399  144147 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-577685"
	I1212 22:57:30.546467  144147 host.go:66] Checking if "addons-577685" exists ...
	I1212 22:57:30.546489  144147 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-577685"
	I1212 22:57:30.546500  144147 addons.go:69] Setting storage-provisioner=true in profile "addons-577685"
	I1212 22:57:30.546514  144147 addons.go:231] Setting addon storage-provisioner=true in "addons-577685"
	I1212 22:57:30.546524  144147 host.go:66] Checking if "addons-577685" exists ...
	I1212 22:57:30.546543  144147 host.go:66] Checking if "addons-577685" exists ...
	I1212 22:57:30.546643  144147 config.go:182] Loaded profile config "addons-577685": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:57:30.546778  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.546796  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.546839  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.546849  144147 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-577685"
	I1212 22:57:30.546388  144147 host.go:66] Checking if "addons-577685" exists ...
	I1212 22:57:30.546873  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.546874  144147 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-577685"
	I1212 22:57:30.546880  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.546892  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.546917  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.545818  144147 addons.go:69] Setting default-storageclass=true in profile "addons-577685"
	I1212 22:57:30.546969  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.546977  144147 addons.go:69] Setting ingress=true in profile "addons-577685"
	I1212 22:57:30.546989  144147 addons.go:231] Setting addon ingress=true in "addons-577685"
	I1212 22:57:30.546992  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.545796  144147 addons.go:69] Setting ingress-dns=true in profile "addons-577685"
	I1212 22:57:30.547013  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.547017  144147 addons.go:231] Setting addon ingress-dns=true in "addons-577685"
	I1212 22:57:30.546971  144147 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-577685"
	I1212 22:57:30.545830  144147 addons.go:69] Setting registry=true in profile "addons-577685"
	I1212 22:57:30.547074  144147 addons.go:231] Setting addon registry=true in "addons-577685"
	I1212 22:57:30.545835  144147 addons.go:231] Setting addon volumesnapshots=true in "addons-577685"
	I1212 22:57:30.546839  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.547208  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.547262  144147 host.go:66] Checking if "addons-577685" exists ...
	I1212 22:57:30.547360  144147 host.go:66] Checking if "addons-577685" exists ...
	I1212 22:57:30.547437  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.547457  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.547514  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.547533  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.547575  144147 host.go:66] Checking if "addons-577685" exists ...
	I1212 22:57:30.547582  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.547610  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.547794  144147 host.go:66] Checking if "addons-577685" exists ...
	I1212 22:57:30.547895  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.547924  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.548136  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.548168  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.548230  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.548252  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.548322  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.548359  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.548403  144147 host.go:66] Checking if "addons-577685" exists ...
	I1212 22:57:30.549803  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.549951  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.566976  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37969
	I1212 22:57:30.567163  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40685
	I1212 22:57:30.567256  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46747
	I1212 22:57:30.567405  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38839
	I1212 22:57:30.567569  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.567983  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.568115  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.568136  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.568461  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.568480  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.568480  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.569343  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36645
	I1212 22:57:30.569394  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.569506  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.569512  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.569701  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.569841  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.569867  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.570113  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.570156  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.570458  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.570475  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.570610  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.570622  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.571049  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.571565  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.571592  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.572031  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.572612  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.572645  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.573129  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.573170  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.573610  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.574287  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.574355  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.587023  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45323
	I1212 22:57:30.587565  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.588146  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.588166  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.588535  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.589067  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.589106  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.591095  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33769
	I1212 22:57:30.591482  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.591963  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.591983  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.592309  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.592883  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.592963  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.595264  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I1212 22:57:30.595327  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44193
	I1212 22:57:30.595708  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.595801  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.596389  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.596410  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.596574  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.596594  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.596945  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.597152  144147 main.go:141] libmachine: (addons-577685) Calling .GetState
	I1212 22:57:30.598082  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.598713  144147 main.go:141] libmachine: (addons-577685) Calling .GetState
	I1212 22:57:30.598781  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45865
	I1212 22:57:30.599519  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:57:30.602067  144147 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1212 22:57:30.600004  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.602691  144147 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-577685"
	I1212 22:57:30.603542  144147 host.go:66] Checking if "addons-577685" exists ...
	I1212 22:57:30.603573  144147 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1212 22:57:30.603590  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1212 22:57:30.603611  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:57:30.603962  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.603992  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44035
	I1212 22:57:30.604038  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.604056  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.604095  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.604590  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.605085  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.605105  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.605459  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.605675  144147 main.go:141] libmachine: (addons-577685) Calling .GetState
	I1212 22:57:30.606456  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.606703  144147 main.go:141] libmachine: (addons-577685) Calling .GetState
	I1212 22:57:30.607841  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.608054  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:57:30.608131  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39181
	I1212 22:57:30.610062  144147 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1212 22:57:30.609217  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:57:30.609233  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:57:30.609268  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.609667  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:57:30.611093  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44901
	I1212 22:57:30.611761  144147 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1212 22:57:30.611774  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1212 22:57:30.611790  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:57:30.611880  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.613828  144147 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1212 22:57:30.612216  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:57:30.612250  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
	I1212 22:57:30.612773  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.613770  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.613934  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43083
	I1212 22:57:30.615470  144147 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 22:57:30.615484  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 22:57:30.615644  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.616042  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:57:30.616233  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:57:30.616293  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.616309  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.616330  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.616388  144147 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa Username:docker}
	I1212 22:57:30.616680  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.617601  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.617643  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.617818  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.617977  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.618401  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.618427  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.618777  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.619320  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.619362  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.625104  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:57:30.625120  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35307
	I1212 22:57:30.625199  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:57:30.625219  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.625248  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33045
	I1212 22:57:30.625253  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42321
	I1212 22:57:30.625555  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.625641  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:57:30.625821  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:57:30.625999  144147 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa Username:docker}
	I1212 22:57:30.626042  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.626010  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.626211  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.626456  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.626481  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.626555  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.626838  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.626955  144147 main.go:141] libmachine: (addons-577685) Calling .GetState
	I1212 22:57:30.627009  144147 main.go:141] libmachine: (addons-577685) Calling .GetState
	I1212 22:57:30.627047  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.627247  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.627857  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:57:30.627902  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.627967  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.627983  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.628159  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:57:30.628314  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:57:30.628447  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:57:30.629490  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.629509  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.629519  144147 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa Username:docker}
	I1212 22:57:30.629578  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.629628  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33671
	I1212 22:57:30.630632  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.630669  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.630891  144147 host.go:66] Checking if "addons-577685" exists ...
	I1212 22:57:30.631236  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.631293  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.631455  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.631480  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.631547  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:57:30.631577  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35637
	I1212 22:57:30.633925  144147 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1212 22:57:30.632475  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.632708  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.632853  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.633042  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.635327  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.635363  144147 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1212 22:57:30.635380  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1212 22:57:30.635396  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:57:30.635424  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.635634  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.635639  144147 main.go:141] libmachine: (addons-577685) Calling .GetState
	I1212 22:57:30.636354  144147 main.go:141] libmachine: (addons-577685) Calling .GetState
	I1212 22:57:30.636539  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.636556  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.636904  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.637352  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.637378  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.639056  144147 addons.go:231] Setting addon default-storageclass=true in "addons-577685"
	I1212 22:57:30.639100  144147 host.go:66] Checking if "addons-577685" exists ...
	I1212 22:57:30.639402  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.639467  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.639502  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.639946  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:57:30.640000  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.640296  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:57:30.640301  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:57:30.640500  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:57:30.642173  144147 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1212 22:57:30.640847  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:57:30.645626  144147 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1212 22:57:30.643880  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38365
	I1212 22:57:30.644018  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39109
	I1212 22:57:30.644185  144147 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa Username:docker}
	I1212 22:57:30.646511  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44195
	I1212 22:57:30.646762  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.647018  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.647998  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.648053  144147 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1212 22:57:30.649472  144147 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1212 22:57:30.648523  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.648618  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.648697  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.650758  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.652277  144147 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1212 22:57:30.650876  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.650889  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.651250  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.652053  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42109
	I1212 22:57:30.655827  144147 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1212 22:57:30.654058  144147 main.go:141] libmachine: (addons-577685) Calling .GetState
	I1212 22:57:30.654128  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.654249  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.654487  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.658373  144147 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1212 22:57:30.657330  144147 main.go:141] libmachine: (addons-577685) Calling .GetState
	I1212 22:57:30.657443  144147 main.go:141] libmachine: (addons-577685) Calling .GetState
	I1212 22:57:30.657694  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.658920  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:57:30.661636  144147 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1212 22:57:30.659856  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.661109  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45161
	I1212 22:57:30.661846  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:57:30.662055  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:57:30.662352  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40515
	I1212 22:57:30.662919  144147 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1212 22:57:30.663224  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.663996  144147 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1212 22:57:30.665414  144147 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1212 22:57:30.665429  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1212 22:57:30.665446  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:57:30.664098  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1212 22:57:30.665502  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:57:30.664358  144147 main.go:141] libmachine: (addons-577685) Calling .GetState
	I1212 22:57:30.667716  144147 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 22:57:30.664586  144147 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-577685" context rescaled to 1 replicas
	I1212 22:57:30.665282  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.666029  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36801
	I1212 22:57:30.666283  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.667582  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:57:30.669073  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.670742  144147 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 22:57:30.669185  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.669481  144147 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 22:57:30.669519  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:57:30.669443  144147 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 22:57:30.669871  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:57:30.669895  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.669947  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.670026  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:57:30.670259  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.672057  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.672075  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:57:30.672258  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:57:30.672599  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.673268  144147 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1212 22:57:30.673320  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.673323  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.673436  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:57:30.674535  144147 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1212 22:57:30.674550  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.676553  144147 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 22:57:30.676603  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 22:57:30.676623  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:57:30.676675  144147 out.go:177] * Verifying Kubernetes components...
	I1212 22:57:30.678156  144147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:57:30.676832  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.679559  144147 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 22:57:30.679575  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1212 22:57:30.679593  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:57:30.681082  144147 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 22:57:30.681129  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1212 22:57:30.681147  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:57:30.677268  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:57:30.677297  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:57:30.677834  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.677837  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.678634  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.680400  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.680828  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36909
	I1212 22:57:30.681381  144147 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa Username:docker}
	I1212 22:57:30.681421  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:57:30.681438  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.681907  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:57:30.681978  144147 main.go:141] libmachine: (addons-577685) Calling .GetState
	I1212 22:57:30.682026  144147 main.go:141] libmachine: (addons-577685) Calling .GetState
	I1212 22:57:30.682069  144147 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa Username:docker}
	I1212 22:57:30.682567  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.682587  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:57:30.682661  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43215
	I1212 22:57:30.683235  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.683261  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:57:30.683420  144147 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa Username:docker}
	I1212 22:57:30.683506  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:57:30.683522  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.683625  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:57:30.683863  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.684014  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:57:30.685673  144147 out.go:177]   - Using image docker.io/busybox:stable
	I1212 22:57:30.686959  144147 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1212 22:57:30.688316  144147 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 22:57:30.686133  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.688329  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1212 22:57:30.684766  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:57:30.688344  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:57:30.688360  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:57:30.684902  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.685543  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.688385  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.688396  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.684662  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:57:30.686713  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:57:30.688381  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.688721  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:57:30.688722  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:57:30.690343  144147 out.go:177]   - Using image docker.io/registry:2.8.3
	I1212 22:57:30.688811  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.688928  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:57:30.688953  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:57:30.689101  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.690725  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.691653  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:57:30.691671  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.692920  144147 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1212 22:57:30.691255  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:57:30.691830  144147 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa Username:docker}
	I1212 22:57:30.691853  144147 main.go:141] libmachine: (addons-577685) Calling .GetState
	I1212 22:57:30.691875  144147 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa Username:docker}
	I1212 22:57:30.692176  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:30.694190  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:30.694877  144147 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1212 22:57:30.694898  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1212 22:57:30.694912  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:57:30.695486  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:57:30.695709  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:57:30.695840  144147 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa Username:docker}
	I1212 22:57:30.696877  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:57:30.698940  144147 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1212 22:57:30.697985  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.698631  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:57:30.700303  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:57:30.700315  144147 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 22:57:30.700326  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1212 22:57:30.700331  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.700338  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:57:30.700591  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:57:30.700735  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:57:30.700834  144147 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa Username:docker}
	I1212 22:57:30.703138  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.703466  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:57:30.703490  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.703638  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:57:30.703809  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:57:30.703938  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:57:30.704051  144147 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa Username:docker}
	W1212 22:57:30.705069  144147 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37112->192.168.39.136:22: read: connection reset by peer
	I1212 22:57:30.705094  144147 retry.go:31] will retry after 305.537648ms: ssh: handshake failed: read tcp 192.168.39.1:37112->192.168.39.136:22: read: connection reset by peer
	I1212 22:57:30.712698  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
	I1212 22:57:30.713038  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:30.713448  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:30.713471  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:30.714019  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:30.714175  144147 main.go:141] libmachine: (addons-577685) Calling .GetState
	I1212 22:57:30.715723  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:57:30.716006  144147 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 22:57:30.716020  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 22:57:30.716032  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:57:30.718849  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.719236  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:57:30.719260  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:30.719406  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:57:30.719570  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:57:30.719712  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:57:30.719813  144147 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa Username:docker}
	I1212 22:57:30.867059  144147 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 22:57:30.867081  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1212 22:57:30.881001  144147 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1212 22:57:30.881025  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1212 22:57:30.899711  144147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1212 22:57:30.900286  144147 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1212 22:57:30.900302  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1212 22:57:30.915297  144147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 22:57:30.915989  144147 node_ready.go:35] waiting up to 6m0s for node "addons-577685" to be "Ready" ...
	I1212 22:57:30.938512  144147 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 22:57:30.938536  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 22:57:30.956860  144147 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1212 22:57:30.956890  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1212 22:57:30.961560  144147 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1212 22:57:30.961584  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1212 22:57:30.971075  144147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 22:57:30.987549  144147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 22:57:31.015261  144147 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1212 22:57:31.015292  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1212 22:57:31.040111  144147 node_ready.go:49] node "addons-577685" has status "Ready":"True"
	I1212 22:57:31.040142  144147 node_ready.go:38] duration metric: took 124.129695ms waiting for node "addons-577685" to be "Ready" ...
	I1212 22:57:31.040157  144147 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:57:31.076640  144147 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-577685" in "kube-system" namespace to be "Ready" ...
	I1212 22:57:31.096581  144147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 22:57:31.098279  144147 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1212 22:57:31.098301  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1212 22:57:31.121187  144147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 22:57:31.136020  144147 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1212 22:57:31.136048  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1212 22:57:31.165168  144147 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1212 22:57:31.165191  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1212 22:57:31.186116  144147 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 22:57:31.186139  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 22:57:31.208452  144147 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1212 22:57:31.208479  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1212 22:57:31.208988  144147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 22:57:31.262853  144147 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1212 22:57:31.262878  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1212 22:57:31.278754  144147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1212 22:57:31.303354  144147 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1212 22:57:31.303377  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1212 22:57:31.333018  144147 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1212 22:57:31.333040  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1212 22:57:31.373095  144147 pod_ready.go:92] pod "etcd-addons-577685" in "kube-system" namespace has status "Ready":"True"
	I1212 22:57:31.373119  144147 pod_ready.go:81] duration metric: took 296.442677ms waiting for pod "etcd-addons-577685" in "kube-system" namespace to be "Ready" ...
	I1212 22:57:31.373129  144147 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-577685" in "kube-system" namespace to be "Ready" ...
	I1212 22:57:31.403711  144147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 22:57:31.429439  144147 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1212 22:57:31.429469  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1212 22:57:31.447883  144147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 22:57:31.477252  144147 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1212 22:57:31.477273  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1212 22:57:31.504966  144147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1212 22:57:31.517785  144147 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1212 22:57:31.517810  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1212 22:57:31.620188  144147 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 22:57:31.620214  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1212 22:57:31.657565  144147 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1212 22:57:31.657596  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1212 22:57:31.685906  144147 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1212 22:57:31.685927  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1212 22:57:31.746127  144147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 22:57:31.807249  144147 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1212 22:57:31.807275  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1212 22:57:31.811494  144147 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1212 22:57:31.811517  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1212 22:57:31.879984  144147 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1212 22:57:31.880018  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1212 22:57:31.886278  144147 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1212 22:57:31.886299  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1212 22:57:31.962846  144147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1212 22:57:31.991623  144147 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1212 22:57:31.991654  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1212 22:57:32.031936  144147 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1212 22:57:32.031965  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1212 22:57:32.099346  144147 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1212 22:57:32.099368  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1212 22:57:32.151200  144147 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 22:57:32.151227  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1212 22:57:32.174476  144147 pod_ready.go:92] pod "kube-apiserver-addons-577685" in "kube-system" namespace has status "Ready":"True"
	I1212 22:57:32.174501  144147 pod_ready.go:81] duration metric: took 801.365182ms waiting for pod "kube-apiserver-addons-577685" in "kube-system" namespace to be "Ready" ...
	I1212 22:57:32.174516  144147 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-577685" in "kube-system" namespace to be "Ready" ...
	I1212 22:57:32.189429  144147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 22:57:32.292197  144147 pod_ready.go:92] pod "kube-controller-manager-addons-577685" in "kube-system" namespace has status "Ready":"True"
	I1212 22:57:32.292240  144147 pod_ready.go:81] duration metric: took 117.713383ms waiting for pod "kube-controller-manager-addons-577685" in "kube-system" namespace to be "Ready" ...
	I1212 22:57:32.292258  144147 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2cptn" in "kube-system" namespace to be "Ready" ...
	I1212 22:57:34.398429  144147 pod_ready.go:102] pod "kube-proxy-2cptn" in "kube-system" namespace has status "Ready":"False"
	I1212 22:57:36.835574  144147 pod_ready.go:102] pod "kube-proxy-2cptn" in "kube-system" namespace has status "Ready":"False"
	I1212 22:57:37.486765  144147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.587012651s)
	I1212 22:57:37.486788  144147 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.571442386s)
	I1212 22:57:37.486814  144147 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1212 22:57:37.486824  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:37.486838  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:37.487199  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:37.487269  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:37.487288  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:37.487303  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:37.487315  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:37.487665  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:37.487682  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:38.189986  144147 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1212 22:57:38.190018  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:57:38.193080  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:38.193555  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:57:38.193583  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:38.193790  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:57:38.194010  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:57:38.194184  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:57:38.194358  144147 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa Username:docker}
	I1212 22:57:38.459970  144147 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1212 22:57:38.510251  144147 addons.go:231] Setting addon gcp-auth=true in "addons-577685"
	I1212 22:57:38.510315  144147 host.go:66] Checking if "addons-577685" exists ...
	I1212 22:57:38.510638  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:38.510689  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:38.526091  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33087
	I1212 22:57:38.526560  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:38.527024  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:38.527046  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:38.527368  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:38.527935  144147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:57:38.527999  144147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:57:38.563746  144147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.592629088s)
	I1212 22:57:38.563806  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:38.563824  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:38.563826  144147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.576244493s)
	I1212 22:57:38.563864  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:38.563881  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:38.563935  144147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.467329212s)
	I1212 22:57:38.564054  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:38.564224  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:38.564124  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:38.564167  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:38.564171  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:38.564393  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:38.564426  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:38.564200  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:38.564482  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:38.564497  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:38.564512  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:38.564457  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:38.564674  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:38.564714  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:38.564723  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:38.564734  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:38.564742  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:38.564852  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:38.564885  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:38.564895  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:38.564943  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:38.564953  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:38.565193  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:38.565232  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:38.565241  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:38.569313  144147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33943
	I1212 22:57:38.569778  144147 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:57:38.570269  144147 main.go:141] libmachine: Using API Version  1
	I1212 22:57:38.570300  144147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:57:38.570698  144147 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:57:38.570873  144147 main.go:141] libmachine: (addons-577685) Calling .GetState
	I1212 22:57:38.572671  144147 main.go:141] libmachine: (addons-577685) Calling .DriverName
	I1212 22:57:38.572869  144147 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1212 22:57:38.572888  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHHostname
	I1212 22:57:38.575899  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:38.576288  144147 main.go:141] libmachine: (addons-577685) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:15:31", ip: ""} in network mk-addons-577685: {Iface:virbr1 ExpiryTime:2023-12-12 23:56:49 +0000 UTC Type:0 Mac:52:54:00:83:15:31 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-577685 Clientid:01:52:54:00:83:15:31}
	I1212 22:57:38.576317  144147 main.go:141] libmachine: (addons-577685) DBG | domain addons-577685 has defined IP address 192.168.39.136 and MAC address 52:54:00:83:15:31 in network mk-addons-577685
	I1212 22:57:38.576461  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHPort
	I1212 22:57:38.576623  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHKeyPath
	I1212 22:57:38.576758  144147 main.go:141] libmachine: (addons-577685) Calling .GetSSHUsername
	I1212 22:57:38.576909  144147 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/addons-577685/id_rsa Username:docker}
	I1212 22:57:38.676604  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:38.676630  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:38.676957  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:38.676991  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:39.126222  144147 pod_ready.go:102] pod "kube-proxy-2cptn" in "kube-system" namespace has status "Ready":"False"
	I1212 22:57:40.094378  144147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.973160022s)
	I1212 22:57:40.094426  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:40.094437  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:40.094457  144147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.885445484s)
	I1212 22:57:40.094495  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:40.094507  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:40.094552  144147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.815765858s)
	I1212 22:57:40.094590  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:40.094605  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:40.094689  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:40.094702  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:40.094705  144147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.690956834s)
	I1212 22:57:40.094719  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:40.094737  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:40.094752  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:40.094773  144147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.646861058s)
	I1212 22:57:40.094795  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:40.094711  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:40.094814  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:40.094803  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:40.094845  144147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.589846991s)
	I1212 22:57:40.094857  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:40.094867  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:40.094907  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:40.094926  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:40.094937  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:40.094947  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:40.094957  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:40.094959  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:40.094981  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:40.094967  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:40.094991  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:40.095001  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:40.095034  144147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.348876113s)
	W1212 22:57:40.095062  144147 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 22:57:40.095080  144147 retry.go:31] will retry after 305.517166ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 22:57:40.095116  144147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.132237571s)
	I1212 22:57:40.095136  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:40.095145  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:40.095165  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:40.095188  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:40.095196  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:40.095205  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:40.095209  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:40.095216  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:40.095218  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:40.095224  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:40.095227  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:40.095236  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:40.095300  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:40.095324  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:40.095334  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:40.095623  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:40.095676  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:40.095685  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:40.095694  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:40.095702  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:40.095965  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:40.095996  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:40.096018  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:40.096028  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:40.096036  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:40.096249  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:40.096271  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:40.096279  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:40.096394  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:40.096418  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:40.096426  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:40.096451  144147 addons.go:467] Verifying addon metrics-server=true in "addons-577685"
	I1212 22:57:40.096518  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:40.096541  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:40.096549  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:40.096554  144147 addons.go:467] Verifying addon registry=true in "addons-577685"
	I1212 22:57:40.098488  144147 out.go:177] * Verifying registry addon...
	I1212 22:57:40.096759  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:40.096783  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:40.096816  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:40.096835  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:40.097344  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:40.097379  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:40.099986  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:40.100004  144147 addons.go:467] Verifying addon ingress=true in "addons-577685"
	I1212 22:57:40.100014  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:40.100013  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:40.101471  144147 out.go:177] * Verifying ingress addon...
	I1212 22:57:40.101028  144147 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1212 22:57:40.103526  144147 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1212 22:57:40.123125  144147 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1212 22:57:40.123146  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:40.137908  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:40.139047  144147 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 22:57:40.139061  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:40.143785  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:40.143805  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:40.144072  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:40.144087  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:40.165469  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:40.400985  144147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 22:57:40.614983  144147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.425504115s)
	I1212 22:57:40.615019  144147 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.042127372s)
	I1212 22:57:40.615033  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:40.615046  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:40.616817  144147 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 22:57:40.615351  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:40.615378  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:40.619798  144147 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1212 22:57:40.618349  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:40.621198  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:40.621211  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:40.621251  144147 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1212 22:57:40.621270  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1212 22:57:40.621482  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:40.621498  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:40.621508  144147 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-577685"
	I1212 22:57:40.622997  144147 out.go:177] * Verifying csi-hostpath-driver addon...
	I1212 22:57:40.624905  144147 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1212 22:57:40.715283  144147 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1212 22:57:40.715303  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1212 22:57:40.781253  144147 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 22:57:40.781275  144147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1212 22:57:40.873969  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:40.874574  144147 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 22:57:40.874607  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:40.908775  144147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 22:57:40.942051  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:40.987027  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:41.267880  144147 pod_ready.go:102] pod "kube-proxy-2cptn" in "kube-system" namespace has status "Ready":"False"
	I1212 22:57:41.278944  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:41.279737  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:41.498996  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:41.754006  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:41.769930  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:41.993464  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:42.158040  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:42.196074  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:42.497361  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:42.621912  144147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.220823456s)
	I1212 22:57:42.621976  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:42.621989  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:42.622355  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:42.622375  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:42.622389  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:42.622411  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:42.622424  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:42.622673  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:42.622703  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:42.622718  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:42.642933  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:42.680787  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:42.994099  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:43.166055  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:43.189370  144147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.28054929s)
	I1212 22:57:43.189424  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:43.189439  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:43.189760  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:43.189779  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:43.189795  144147 main.go:141] libmachine: Making call to close driver server
	I1212 22:57:43.189805  144147 main.go:141] libmachine: (addons-577685) Calling .Close
	I1212 22:57:43.189803  144147 main.go:141] libmachine: (addons-577685) DBG | Closing plugin on server side
	I1212 22:57:43.190032  144147 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:57:43.190051  144147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:57:43.191897  144147 addons.go:467] Verifying addon gcp-auth=true in "addons-577685"
	I1212 22:57:43.193440  144147 out.go:177] * Verifying gcp-auth addon...
	I1212 22:57:43.195589  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:43.195874  144147 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1212 22:57:43.213463  144147 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1212 22:57:43.213482  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:43.222447  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:43.423369  144147 pod_ready.go:102] pod "kube-proxy-2cptn" in "kube-system" namespace has status "Ready":"False"
	I1212 22:57:43.507824  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:43.665443  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:43.672475  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:43.727336  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:44.005312  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:44.143202  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:44.175808  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:44.227028  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:44.492519  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:44.642959  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:44.670415  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:44.728447  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:44.911501  144147 pod_ready.go:92] pod "kube-proxy-2cptn" in "kube-system" namespace has status "Ready":"True"
	I1212 22:57:44.911523  144147 pod_ready.go:81] duration metric: took 12.619258016s waiting for pod "kube-proxy-2cptn" in "kube-system" namespace to be "Ready" ...
	I1212 22:57:44.911532  144147 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-577685" in "kube-system" namespace to be "Ready" ...
	I1212 22:57:44.923918  144147 pod_ready.go:92] pod "kube-scheduler-addons-577685" in "kube-system" namespace has status "Ready":"True"
	I1212 22:57:44.923939  144147 pod_ready.go:81] duration metric: took 12.400877ms waiting for pod "kube-scheduler-addons-577685" in "kube-system" namespace to be "Ready" ...
	I1212 22:57:44.923946  144147 pod_ready.go:38] duration metric: took 13.883775244s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:57:44.923966  144147 api_server.go:52] waiting for apiserver process to appear ...
	I1212 22:57:44.924017  144147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 22:57:44.974895  144147 api_server.go:72] duration metric: took 14.302789159s to wait for apiserver process to appear ...
	I1212 22:57:44.974919  144147 api_server.go:88] waiting for apiserver healthz status ...
	I1212 22:57:44.974941  144147 api_server.go:253] Checking apiserver healthz at https://192.168.39.136:8443/healthz ...
	I1212 22:57:44.983141  144147 api_server.go:279] https://192.168.39.136:8443/healthz returned 200:
	ok
	I1212 22:57:44.986233  144147 api_server.go:141] control plane version: v1.28.4
	I1212 22:57:44.986250  144147 api_server.go:131] duration metric: took 11.324442ms to wait for apiserver health ...
	I1212 22:57:44.986258  144147 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 22:57:45.007618  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:45.009937  144147 system_pods.go:59] 18 kube-system pods found
	I1212 22:57:45.009968  144147 system_pods.go:61] "coredns-5dd5756b68-5p4zl" [a3248804-9725-4b9d-8781-e2881ea46ca7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 22:57:45.009984  144147 system_pods.go:61] "csi-hostpath-attacher-0" [805a8932-8e71-47da-be7f-45f22adc389c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 22:57:45.009996  144147 system_pods.go:61] "csi-hostpath-resizer-0" [3e63f7c6-1d2b-49bd-aa50-ec2b6779e45c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 22:57:45.010005  144147 system_pods.go:61] "csi-hostpathplugin-xzcf7" [3f8483a1-2db5-4d53-bc9f-13b2840836b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 22:57:45.010014  144147 system_pods.go:61] "etcd-addons-577685" [6d0a2d99-136f-4f3f-a59c-a48809318d74] Running
	I1212 22:57:45.010021  144147 system_pods.go:61] "kube-apiserver-addons-577685" [9d8e422d-bc18-48b9-a882-fa31e046fdb8] Running
	I1212 22:57:45.010029  144147 system_pods.go:61] "kube-controller-manager-addons-577685" [6d735e9f-2942-4f4e-8197-2810a878515d] Running
	I1212 22:57:45.010044  144147 system_pods.go:61] "kube-ingress-dns-minikube" [51a0e5db-f928-4b3f-acef-f9d813ba4965] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 22:57:45.010053  144147 system_pods.go:61] "kube-proxy-2cptn" [d540fce8-3ea8-46bc-9484-1348f43f1f3b] Running
	I1212 22:57:45.010062  144147 system_pods.go:61] "kube-scheduler-addons-577685" [eb70b91d-ccd6-4205-8314-604bc9eccaa1] Running
	I1212 22:57:45.010072  144147 system_pods.go:61] "metrics-server-7c66d45ddc-lclrb" [55901824-a685-464c-908b-469b9b6eb95f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 22:57:45.010090  144147 system_pods.go:61] "nvidia-device-plugin-daemonset-knlgj" [44d91221-4176-4754-8d10-d474c4c15c2f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 22:57:45.010101  144147 system_pods.go:61] "registry-hqwg4" [d0bf4dcc-a461-4ab3-b7cd-a50f0b4d61c4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 22:57:45.010115  144147 system_pods.go:61] "registry-proxy-fptb7" [c0a8fb28-ceaa-4e60-8815-9440f1f663a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 22:57:45.010129  144147 system_pods.go:61] "snapshot-controller-58dbcc7b99-9skl6" [d7465fca-56ba-4835-a871-00f68bc478b9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 22:57:45.010143  144147 system_pods.go:61] "snapshot-controller-58dbcc7b99-qm6xr" [8b88bf00-936b-43e7-a858-260a414bfb1b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 22:57:45.010156  144147 system_pods.go:61] "storage-provisioner" [df257cb0-9230-401c-bfd9-e8d93b09c2dd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 22:57:45.010169  144147 system_pods.go:61] "tiller-deploy-7b677967b9-kjkq6" [d4f500ad-4a08-4478-af71-f772ba964f09] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1212 22:57:45.010182  144147 system_pods.go:74] duration metric: took 23.916259ms to wait for pod list to return data ...
	I1212 22:57:45.010196  144147 default_sa.go:34] waiting for default service account to be created ...
	I1212 22:57:45.012229  144147 default_sa.go:45] found service account: "default"
	I1212 22:57:45.012244  144147 default_sa.go:55] duration metric: took 2.038807ms for default service account to be created ...
	I1212 22:57:45.012252  144147 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 22:57:45.028479  144147 system_pods.go:86] 18 kube-system pods found
	I1212 22:57:45.028503  144147 system_pods.go:89] "coredns-5dd5756b68-5p4zl" [a3248804-9725-4b9d-8781-e2881ea46ca7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 22:57:45.028514  144147 system_pods.go:89] "csi-hostpath-attacher-0" [805a8932-8e71-47da-be7f-45f22adc389c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 22:57:45.028526  144147 system_pods.go:89] "csi-hostpath-resizer-0" [3e63f7c6-1d2b-49bd-aa50-ec2b6779e45c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 22:57:45.028538  144147 system_pods.go:89] "csi-hostpathplugin-xzcf7" [3f8483a1-2db5-4d53-bc9f-13b2840836b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 22:57:45.028550  144147 system_pods.go:89] "etcd-addons-577685" [6d0a2d99-136f-4f3f-a59c-a48809318d74] Running
	I1212 22:57:45.028560  144147 system_pods.go:89] "kube-apiserver-addons-577685" [9d8e422d-bc18-48b9-a882-fa31e046fdb8] Running
	I1212 22:57:45.028568  144147 system_pods.go:89] "kube-controller-manager-addons-577685" [6d735e9f-2942-4f4e-8197-2810a878515d] Running
	I1212 22:57:45.028585  144147 system_pods.go:89] "kube-ingress-dns-minikube" [51a0e5db-f928-4b3f-acef-f9d813ba4965] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 22:57:45.028593  144147 system_pods.go:89] "kube-proxy-2cptn" [d540fce8-3ea8-46bc-9484-1348f43f1f3b] Running
	I1212 22:57:45.028602  144147 system_pods.go:89] "kube-scheduler-addons-577685" [eb70b91d-ccd6-4205-8314-604bc9eccaa1] Running
	I1212 22:57:45.028613  144147 system_pods.go:89] "metrics-server-7c66d45ddc-lclrb" [55901824-a685-464c-908b-469b9b6eb95f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 22:57:45.028625  144147 system_pods.go:89] "nvidia-device-plugin-daemonset-knlgj" [44d91221-4176-4754-8d10-d474c4c15c2f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 22:57:45.028636  144147 system_pods.go:89] "registry-hqwg4" [d0bf4dcc-a461-4ab3-b7cd-a50f0b4d61c4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 22:57:45.028648  144147 system_pods.go:89] "registry-proxy-fptb7" [c0a8fb28-ceaa-4e60-8815-9440f1f663a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 22:57:45.028662  144147 system_pods.go:89] "snapshot-controller-58dbcc7b99-9skl6" [d7465fca-56ba-4835-a871-00f68bc478b9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 22:57:45.028674  144147 system_pods.go:89] "snapshot-controller-58dbcc7b99-qm6xr" [8b88bf00-936b-43e7-a858-260a414bfb1b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 22:57:45.028686  144147 system_pods.go:89] "storage-provisioner" [df257cb0-9230-401c-bfd9-e8d93b09c2dd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 22:57:45.028697  144147 system_pods.go:89] "tiller-deploy-7b677967b9-kjkq6" [d4f500ad-4a08-4478-af71-f772ba964f09] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1212 22:57:45.028710  144147 system_pods.go:126] duration metric: took 16.452275ms to wait for k8s-apps to be running ...
	I1212 22:57:45.028721  144147 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 22:57:45.028773  144147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:57:45.062100  144147 system_svc.go:56] duration metric: took 33.370399ms WaitForService to wait for kubelet.
	I1212 22:57:45.062125  144147 kubeadm.go:581] duration metric: took 14.390023839s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 22:57:45.062148  144147 node_conditions.go:102] verifying NodePressure condition ...
	I1212 22:57:45.074434  144147 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 22:57:45.074494  144147 node_conditions.go:123] node cpu capacity is 2
	I1212 22:57:45.074508  144147 node_conditions.go:105] duration metric: took 12.354192ms to run NodePressure ...
	I1212 22:57:45.074520  144147 start.go:228] waiting for startup goroutines ...
	I1212 22:57:45.148061  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:45.192952  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:45.246089  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:45.493540  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:45.645665  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:45.686276  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:45.731182  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:45.993096  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:46.145950  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:46.174186  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:46.226560  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:46.493072  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:46.642961  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:46.670892  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:46.727340  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:46.992588  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:47.144583  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:47.194807  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:47.230532  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:47.500941  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:47.642960  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:47.671793  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:47.730603  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:47.993862  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:48.142699  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:48.185492  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:48.227788  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:48.493993  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:48.642845  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:48.670751  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:48.727154  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:48.995056  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:49.142298  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:49.174413  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:49.226852  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:49.495954  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:49.648630  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:49.676856  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:49.727235  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:49.992272  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:50.145410  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:50.173589  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:50.229191  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:50.493161  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:50.643283  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:50.673673  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:50.726285  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:50.999323  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:51.143668  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:51.205684  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:51.226490  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:51.495771  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:51.652010  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:51.681611  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:51.733925  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:52.003651  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:52.158037  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:52.173864  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:52.225743  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:52.492635  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:52.643273  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:52.682790  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:52.731412  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:52.999890  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:53.148180  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:53.171705  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:53.231169  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:53.498273  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:53.643317  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:53.669989  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:53.726131  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:53.998003  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:54.142724  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:54.170682  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:54.226837  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:54.493998  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:54.643602  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:54.670660  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:54.726763  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:54.993171  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:55.144651  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:55.173342  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:55.226919  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:55.498572  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:55.645064  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:55.677098  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:55.726730  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:55.993100  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:56.142660  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:56.175888  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:56.226165  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:56.492691  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:56.642193  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:56.671106  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:56.726455  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:56.995072  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:57.143463  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:57.170860  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:57.226100  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:57.493257  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:57.645186  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:57.670641  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:57.727247  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:57.993864  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:58.147062  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:58.172358  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:58.227755  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:58.497600  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:58.644363  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:58.674577  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:58.727180  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:59.012485  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:59.144334  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:59.182856  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:59.226965  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:59.494144  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:57:59.642348  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:57:59.671940  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:57:59.727649  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:57:59.994127  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:00.146297  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:00.180827  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:00.227103  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:00.494933  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:00.643360  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:00.670523  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:00.726191  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:00.992947  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:01.143340  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:01.174322  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:01.226579  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:01.493941  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:01.644045  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:01.670638  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:01.726816  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:01.993704  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:02.144605  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:02.173449  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:02.226729  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:02.493742  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:02.642905  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:02.671613  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:02.727582  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:02.993252  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:03.142611  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:03.171002  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:03.227337  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:03.493482  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:03.642683  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:03.670116  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:03.727446  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:03.993533  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:04.143612  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:04.172126  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:04.227342  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:04.492895  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:04.646288  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:04.674926  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:04.728080  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:04.995103  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:05.143923  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:05.177342  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:05.227700  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:05.494047  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:05.665128  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:05.673456  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:05.728008  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:05.994011  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:06.143405  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:06.174773  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:06.230945  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:06.493741  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:06.644804  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:06.672120  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:06.727188  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:06.992524  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:07.144343  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:07.170955  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:07.226857  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:07.503216  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:07.645975  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:07.671810  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:07.727185  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:07.992327  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:08.143318  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:08.174661  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:08.226654  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:08.496583  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:08.643234  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:08.671164  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:08.726169  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:08.994748  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:09.143521  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:09.172301  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:09.227602  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:09.494095  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:09.968442  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:09.968946  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:09.968973  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:09.995510  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:10.144161  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:10.175875  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:10.227045  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:10.496045  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:10.643254  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:10.671040  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:10.726798  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:10.995575  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:11.144258  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:11.174446  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:11.227585  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:11.493980  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:11.643101  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:11.671077  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:11.727117  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:11.993879  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:12.142917  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:12.171176  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:12.226990  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:12.503784  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:12.642301  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:12.670646  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:12.729165  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:12.993825  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:13.147559  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:13.170567  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:13.227078  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:13.502145  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:13.644347  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:13.669904  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:13.727075  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:14.003296  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:14.143595  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:14.175903  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:14.227424  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:14.495035  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:14.643383  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:14.669974  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:14.727056  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:14.997505  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:15.265449  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:15.265799  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:15.267225  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:15.493538  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:15.643669  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:15.671439  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:15.727279  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:15.993660  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:16.144230  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:16.172111  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:16.226015  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:16.497643  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:16.642944  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:16.671688  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:16.728564  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:16.994491  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:17.147082  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:17.194542  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:17.231381  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:17.493221  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:17.643177  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:17.670307  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:17.726503  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:17.993503  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:18.142822  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:18.171422  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:18.226538  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:18.493636  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:18.643593  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:18.670702  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:18.727179  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:18.993715  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:19.143940  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:19.171393  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:19.226968  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:19.493966  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:19.643121  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:19.673727  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:19.727658  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:19.994939  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:20.143601  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:20.188688  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:20.226912  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:20.494698  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:20.643421  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:20.673587  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:20.728011  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:20.993806  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:21.145910  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:21.174726  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:21.227182  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:21.492193  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:21.644289  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:21.672362  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:21.730020  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:21.994826  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:22.143611  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:22.176696  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:22.227002  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:22.493078  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:22.642788  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:22.674753  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:22.726938  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:22.996875  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:23.143533  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:23.172784  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:23.227049  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:23.494073  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:23.643235  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:23.670412  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:23.728648  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:23.994320  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:24.143542  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:24.178778  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:24.227758  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:24.493315  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:24.642931  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:24.671357  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:24.726502  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:24.994046  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:25.143565  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:25.174650  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:25.228330  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:25.498863  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:25.644142  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:25.671516  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:25.727708  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:26.000962  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:26.151568  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:26.182383  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:26.226717  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:26.495189  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:26.645692  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:26.671267  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:26.726619  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:26.995203  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:27.142874  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:27.175106  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:27.227662  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:27.494434  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:27.643353  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:27.671708  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:27.730248  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:27.995489  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:28.143166  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:28.173083  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:28.227120  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:28.493716  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:28.645601  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:28.674360  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:28.734783  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:29.011262  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:29.144517  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:29.170181  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:29.226503  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:29.493328  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:29.642564  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:29.670701  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:29.727569  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:30.002585  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:30.167999  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:30.188918  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:30.239964  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:30.494028  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:30.642806  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:30.671143  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:30.726771  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:30.994714  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:31.143592  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:31.171402  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:31.226734  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:31.494710  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:31.643664  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:31.671684  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:31.727377  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:31.994108  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:32.143856  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:32.172903  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:32.229325  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:32.495083  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:32.644149  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:32.670342  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:32.726973  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:33.009467  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:33.144683  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:33.170486  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:33.226398  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:33.493486  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:33.645766  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:33.671045  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:33.730549  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:33.993940  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:34.142817  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:34.171545  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:34.227632  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:34.495178  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:34.644493  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:34.672649  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:34.727195  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:34.995158  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:35.143328  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:35.172622  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:35.229129  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:35.498569  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:35.644732  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:35.671276  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:35.726426  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:35.994280  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:36.146778  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:36.173287  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:36.226822  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:36.493566  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:36.643741  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:36.671108  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:36.726364  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:36.993532  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:37.143944  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:37.173883  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:37.226486  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:37.493189  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:37.642972  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:37.671623  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:37.727158  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:37.992983  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:38.143569  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:38.171766  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:38.229903  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:38.497954  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:38.643541  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:38.671473  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:38.726926  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:38.995989  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:39.142746  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:39.175451  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:39.227208  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:39.492609  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:39.646753  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:39.671300  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:39.726748  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:39.994529  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:40.143910  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:40.173620  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:40.227237  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:40.493150  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:40.643580  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:40.671131  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:40.726752  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:40.994409  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:41.143135  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:41.170170  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:41.226619  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:41.492959  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:41.642754  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:41.671143  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:41.732047  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:41.993978  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:42.142874  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:42.171547  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:42.226982  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:42.493804  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:42.642851  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:42.671475  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:42.726788  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:42.994589  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:43.143688  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:43.172163  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:43.230215  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:43.495041  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:43.646730  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:43.671031  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:43.726140  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:43.992742  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:44.143092  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:44.173999  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:44.226732  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:44.493900  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:44.643490  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:44.670229  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:44.727273  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:44.993834  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:45.143823  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:45.175394  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:45.227146  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:45.753843  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:45.753969  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:45.754264  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:45.756777  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:45.994114  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:46.146709  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:46.181670  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:46.227395  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:46.494941  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:46.662380  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:46.682342  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:58:46.738251  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:47.000052  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:47.152219  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:47.174760  144147 kapi.go:107] duration metric: took 1m7.073729995s to wait for kubernetes.io/minikube-addons=registry ...
	I1212 22:58:47.227279  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:47.492982  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:47.643049  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:47.726992  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:47.994285  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:48.145719  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:48.231145  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:48.494694  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:48.642796  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:48.726539  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:48.994049  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:49.143872  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:49.227224  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:49.492242  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:49.642858  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:49.726576  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:49.992987  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:50.271717  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:50.284625  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:50.497263  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:50.646847  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:50.729386  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:50.994702  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:51.187938  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:51.251069  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:51.494127  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:51.643352  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:51.726513  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:51.993706  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:52.142636  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:52.235689  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:52.497956  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:52.660740  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:52.726547  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:52.996857  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:53.144386  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:53.236301  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:53.494092  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:53.649749  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:53.726767  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:54.008481  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:54.144952  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:54.230880  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:54.497744  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:54.643742  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:54.735013  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:55.018427  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:55.143878  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:55.234921  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:55.494543  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:55.648275  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:55.727047  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:55.994031  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:56.144266  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:56.243333  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:56.524006  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:56.645722  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:56.726976  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:56.994653  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:57.142305  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:57.226482  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:57.496677  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:57.642832  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:57.727241  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:57.994642  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:58.143124  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:58.227448  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:58.493889  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:58.644043  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:58.880960  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:58.994359  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:58:59.144198  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:59.226561  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:58:59.493622  144147 kapi.go:107] duration metric: took 1m18.86871408s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1212 22:58:59.643603  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:58:59.726239  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:00.143487  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:00.227465  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:00.645679  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:00.726596  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:01.142770  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:01.227193  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:01.643293  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:01.726160  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:02.143546  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:02.226943  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:02.643147  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:02.726801  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:03.143455  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:03.227027  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:03.642073  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:03.727377  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:04.143627  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:04.227223  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:04.643977  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:04.726817  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:05.144755  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:05.226900  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:05.643453  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:05.728427  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:06.144540  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:06.228500  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:06.644916  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:06.728192  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:07.144146  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:07.227281  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:07.643462  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:07.726665  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:08.145107  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:08.227292  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:08.642770  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:08.726833  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:09.148956  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:09.227216  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:09.643236  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:09.727043  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:10.143086  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:10.226562  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:10.643628  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:10.727474  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:11.144113  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:11.227541  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:11.642566  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:11.727252  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:12.143369  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:12.227130  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:12.645598  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:12.726705  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:13.142861  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:13.226414  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:13.644180  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:13.726477  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:14.146009  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:14.226964  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:14.642775  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:14.727598  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:15.147086  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:15.226478  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:15.645847  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:15.725982  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:16.143699  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:16.226902  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:16.644483  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:16.726175  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:17.143292  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:17.226384  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:17.644730  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:17.726883  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:18.144321  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:18.227711  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:18.643594  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:18.726989  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:19.142345  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:19.229436  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:19.646988  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:19.727265  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:20.144184  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:20.227162  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:20.643311  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:20.727471  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:21.143975  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:21.227217  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:21.643382  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:21.726250  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:22.143182  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:22.226831  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:22.643119  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:22.726304  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:23.145854  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:23.226657  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:23.647229  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:23.726866  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:24.143357  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:24.228481  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:24.643675  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:24.727176  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:25.145688  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:25.227651  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:25.643523  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:25.727254  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:26.143249  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:26.226575  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:26.645732  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:26.727279  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:27.143226  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:27.226628  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:27.647929  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:27.727443  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:28.146532  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:28.228248  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:28.645222  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:28.726430  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:29.146847  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:29.227140  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:29.643139  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:29.726388  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:30.143052  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:30.225788  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:30.642557  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:30.726382  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:31.143492  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:31.226435  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:31.643900  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:31.727898  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:32.144071  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:32.226691  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:32.644300  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:32.727748  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:33.142951  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:33.226612  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:33.647226  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:33.727253  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:34.143287  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:34.227100  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:34.643197  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:34.726657  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:35.143665  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:35.227201  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:35.643017  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:35.727236  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:36.143702  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:36.226114  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:36.643083  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:36.726632  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:37.144458  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:37.227244  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:37.643117  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:37.726873  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:38.147402  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:38.227779  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:38.642687  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:38.726921  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:39.146667  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:39.226582  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:39.644191  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:39.726440  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:40.143575  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:40.227001  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:40.644623  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:40.726941  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:41.142637  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:41.227873  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:41.642932  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:41.727073  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:42.143429  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:42.226668  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:42.661311  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:42.726595  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:43.143093  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:43.226717  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:43.650195  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:43.727277  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:44.143558  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:44.226811  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:44.644701  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:44.726701  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:45.143603  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:45.228313  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:45.643655  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:45.726762  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:46.145639  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:46.227385  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:46.643832  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:46.726973  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:47.142888  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:47.226868  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:47.642825  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:47.728379  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:48.143771  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:48.229747  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:48.647994  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:48.727771  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:49.143209  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:49.228030  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:49.648553  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:49.726673  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:50.143781  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:50.227599  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:50.644236  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:50.727259  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:51.143924  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:51.226720  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:51.644812  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:51.727785  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:52.143528  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:52.230897  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:52.644011  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:52.727509  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:53.143697  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:53.226203  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:53.643320  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:53.726447  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:54.143574  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:54.226697  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:54.644873  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:54.727384  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:55.145553  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:55.226322  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:55.643462  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:55.726744  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:56.142769  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:56.227004  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:56.642673  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:56.726837  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:57.142782  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:57.228008  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:57.642528  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:57.726492  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:58.144732  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:58.226185  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:58.642812  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:58.728296  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:59.142937  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:59.227316  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:59:59.644532  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:59:59.729518  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:00.153744  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 23:00:00.231088  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:00.643015  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 23:00:00.731318  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:01.143589  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 23:00:01.227235  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:01.642955  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 23:00:01.728661  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:02.143344  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 23:00:02.227890  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:02.646232  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 23:00:02.727558  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:03.148014  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 23:00:03.227721  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:03.777798  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 23:00:03.778614  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:04.143479  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 23:00:04.226674  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:04.643745  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 23:00:04.726533  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:05.144752  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 23:00:05.227397  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:05.644212  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 23:00:05.726232  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:06.143297  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 23:00:06.226933  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:06.642457  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 23:00:06.727945  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:07.142872  144147 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 23:00:07.227808  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:07.646231  144147 kapi.go:107] duration metric: took 2m27.542699374s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1212 23:00:07.726988  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:08.231694  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:08.726572  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:09.226952  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:09.727697  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:10.247031  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:10.727825  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:11.228963  144147 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 23:00:11.727152  144147 kapi.go:107] duration metric: took 2m28.531269826s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1212 23:00:11.729032  144147 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-577685 cluster.
	I1212 23:00:11.730701  144147 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1212 23:00:11.732565  144147 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1212 23:00:11.734338  144147 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, default-storageclass, helm-tiller, metrics-server, ingress-dns, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1212 23:00:11.735953  144147 addons.go:502] enable addons completed in 2m41.190273673s: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner default-storageclass helm-tiller metrics-server ingress-dns inspektor-gadget storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1212 23:00:11.735999  144147 start.go:233] waiting for cluster config update ...
	I1212 23:00:11.736025  144147 start.go:242] writing updated cluster config ...
	I1212 23:00:11.736342  144147 ssh_runner.go:195] Run: rm -f paused
	I1212 23:00:11.790980  144147 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 23:00:11.793200  144147 out.go:177] * Done! kubectl is now configured to use "addons-577685" cluster and "default" namespace by default
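	(Editor's note: the gcp-auth messages above refer to the `gcp-auth-skip-secret` pod label that the addon checks before mounting GCP credentials. As a minimal, illustrative sketch only — the pod name, namespace, image, and the label value "true" are placeholders/assumptions, since the log only names the label key — a pod that opts out of credential mounting could be written as:

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds              # hypothetical pod name
	      labels:
	        gcp-auth-skip-secret: "true"  # label key named in the log above; value assumed
	    spec:
	      containers:
	      - name: app
	        image: gcr.io/google-samples/hello-app   # placeholder image, as used elsewhere in this report

	Per the log, pods created before the addon finished would need to be recreated, or the addon re-enabled with --refresh, for credentials to be mounted.)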
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-12 22:56:46 UTC, ends at Tue 2023-12-12 23:03:06 UTC. --
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.689562869Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702422186689546773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543773,},InodesUsed:&UInt64Value{Value:227,},},},}" file="go-grpc-middleware/chain.go:25" id=6bf3f647-6e05-461b-a535-babfc1304815 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.690109250Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a21a752d-388e-455c-bdb1-ede0485bb8c7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.690246839Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a21a752d-388e-455c-bdb1-ede0485bb8c7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.690652564Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f973a2435ac22a637e141edf896a62c9d462ac1dcd2bdc8b2d7b4f08b610215,PodSandboxId:5ef0ff1a8c56c6f70cd831ab3fd87fc2b214d1378fec10f4903d591d4f5f0c2d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702422179520846580,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-znngs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a47983c5-b656-4144-9afc-8813e008ff8e,},Annotations:map[string]string{io.kubernetes.container.hash: babf058c,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81957d9e0d8dc2e26fd3f5f6b0e75ae047a689a285da396cb2ab55fa522e5a63,PodSandboxId:d024bfcb7af46904d95a8635429f52543bbd2786b8eebfff9c1c1fce61abe1ce,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1702422055156091765,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-2dshq,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: a98eeba9-4220-4a45-9383-ca3970d3c877,},An
notations:map[string]string{io.kubernetes.container.hash: 7d87cb6c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8eb120962a5e1ae43987f7a568af72e628b559bef0b2e853cd05368eb06cb6,PodSandboxId:faefb957fdd8e61ecca5f9d9ed696cf5129e52557be6bbca65c73fba7a3871ca,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702422037372688550,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: c4c14be0-be04-41cc-a432-9bd05871708b,},Annotations:map[string]string{io.kubernetes.container.hash: 127cbd0e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dd6b9bfa01220451967d7148bcd5f792b977255924fd175001c133fa41947ac,PodSandboxId:8a796eea5e603630ddf572243faf379b878ec40f0e660f8218a3a2f859baec1b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1702422010353911681,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-68kz9,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 1fd4f1da-11ad-4afe-87c3-8ffa5256a97a,},Annotations:map[string]string{io.kubernetes.container.hash: 4aaead07,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809da99345c5ab14f731cf90549e75f86d8cb0e4a541e1c7da3825c4748cdb62,PodSandboxId:8e153eaa800032049b936a4e692628b0cdd1b4b835f8f0a4315786003ddcd84f,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17024219
55473481551,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-889mp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 07578813-879d-4683-9946-ee13082762b5,},Annotations:map[string]string{io.kubernetes.container.hash: c7b4aaf8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73d1792e9979dcbfa097e6e68325d866014e1b99cb647dbe52b8c0f9c809f31d,PodSandboxId:8b1563f11724b012b255ba91c4ff65189e0eaaffe5f787552c6c2c49eb2dad00,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1702421926658891840,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8m62d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d4f02b64-2b77-4521-8367-a5dd110caf3f,},Annotations:map[string]string{io.kubernetes.container.hash: a628aea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49231e9bcbd9ddc260b2ba1a655f791f29fa580084246c05da35233bd5198ed0,PodSandboxId:6fa8970869d53aa5d26058f2e35ec31eb63c0d4c3d4bb15922c2035d65a4de26,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702421867873619900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df257cb0-9230-401c-bfd9-e8d93b09c2dd,},Annotations:map[string]string{io.kubernetes.container.hash: 700cb1df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18c6aefcbf7a946913f8416e4abd9e3ad9e3ad1fa1fc894362bae202b90dba5,PodSandboxId:9f5222bbd7a69fd97b35969c3ba19fe545a5496b0d3c7652eee451d2b8e6d6ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,Sta
te:CONTAINER_RUNNING,CreatedAt:1702421863516433905,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2cptn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d540fce8-3ea8-46bc-9484-1348f43f1f3b,},Annotations:map[string]string{io.kubernetes.container.hash: caa32064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c852c624dd6802330eee64044b4f7169c62410bb167b21a6756315c69d8f17a,PodSandboxId:5d8b12340b6fe385d362485a22fc617209d83249b20efc309a5a046f0070a5dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:17
02421857170859910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5p4zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3248804-9725-4b9d-8781-e2881ea46ca7,},Annotations:map[string]string{io.kubernetes.container.hash: 32dec3cf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab15762cf6108afb923413dc4a5bd7e6f97bcb2c6e15920b3af46a5739c4f191,PodSandboxId:5ea255122397c47cdd6ce12a864eeea14b212fc82fbde60fb782b1a5ed48cd1f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d3
5c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702421830899680239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-577685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09687aca731ec93fceebeb4b9ffb4a5a,},Annotations:map[string]string{io.kubernetes.container.hash: e00fdaf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ad068a00e56b6a71280d508465492433c1a177edb6f45b00e120f327ee28a6,PodSandboxId:3f223a83c9df711935a63a32a27b28a70c2c330f81c0bc19c4b3ef44adb157ad,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},}
,ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702421830947239558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-577685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03be545ee0d1f351ad0ecebdf06a7726,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f18e7e6483633af8c28d948b0c0438d1918dda5bdc68622484d15b9ea8ba73b2,PodSandboxId:b33c4a254ab57f3a51c4e73723613c326443f44668cd6fff20f35407baa4c0fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:regis
try.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702421830880809558,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-577685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f0f1be916aeb3ef69080697700ddd03,},Annotations:map[string]string{io.kubernetes.container.hash: 423ed332,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962d3be564edfd79775ff8e7bb1f3dd02a1365bc397290ff36d80ee28ff28d08,PodSandboxId:82636936e765cf412ccbf01d841b9b54a9652a1e8f25e7d23ea718cff096e799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8
s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702421830548348446,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-577685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb5d9a7e3bcfc3549d819627df4f24bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a21a752d-388e-455c-bdb1-ede0485bb8c7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.732119430Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4f75b669-66a5-4897-a0ad-2b0c26e4f2ff name=/runtime.v1.RuntimeService/Version
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.732179554Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4f75b669-66a5-4897-a0ad-2b0c26e4f2ff name=/runtime.v1.RuntimeService/Version
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.734225922Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=053091f0-d408-42d3-af3b-b3334745a617 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.735591451Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702422186735567747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543773,},InodesUsed:&UInt64Value{Value:227,},},},}" file="go-grpc-middleware/chain.go:25" id=053091f0-d408-42d3-af3b-b3334745a617 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.736252780Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6b5cb2bd-7fce-45cd-9551-93f3710c24e1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.736305640Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6b5cb2bd-7fce-45cd-9551-93f3710c24e1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.736647587Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f973a2435ac22a637e141edf896a62c9d462ac1dcd2bdc8b2d7b4f08b610215,PodSandboxId:5ef0ff1a8c56c6f70cd831ab3fd87fc2b214d1378fec10f4903d591d4f5f0c2d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702422179520846580,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-znngs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a47983c5-b656-4144-9afc-8813e008ff8e,},Annotations:map[string]string{io.kubernetes.container.hash: babf058c,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81957d9e0d8dc2e26fd3f5f6b0e75ae047a689a285da396cb2ab55fa522e5a63,PodSandboxId:d024bfcb7af46904d95a8635429f52543bbd2786b8eebfff9c1c1fce61abe1ce,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1702422055156091765,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-2dshq,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: a98eeba9-4220-4a45-9383-ca3970d3c877,},An
notations:map[string]string{io.kubernetes.container.hash: 7d87cb6c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8eb120962a5e1ae43987f7a568af72e628b559bef0b2e853cd05368eb06cb6,PodSandboxId:faefb957fdd8e61ecca5f9d9ed696cf5129e52557be6bbca65c73fba7a3871ca,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702422037372688550,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: c4c14be0-be04-41cc-a432-9bd05871708b,},Annotations:map[string]string{io.kubernetes.container.hash: 127cbd0e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dd6b9bfa01220451967d7148bcd5f792b977255924fd175001c133fa41947ac,PodSandboxId:8a796eea5e603630ddf572243faf379b878ec40f0e660f8218a3a2f859baec1b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1702422010353911681,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-68kz9,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 1fd4f1da-11ad-4afe-87c3-8ffa5256a97a,},Annotations:map[string]string{io.kubernetes.container.hash: 4aaead07,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809da99345c5ab14f731cf90549e75f86d8cb0e4a541e1c7da3825c4748cdb62,PodSandboxId:8e153eaa800032049b936a4e692628b0cdd1b4b835f8f0a4315786003ddcd84f,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17024219
55473481551,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-889mp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 07578813-879d-4683-9946-ee13082762b5,},Annotations:map[string]string{io.kubernetes.container.hash: c7b4aaf8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73d1792e9979dcbfa097e6e68325d866014e1b99cb647dbe52b8c0f9c809f31d,PodSandboxId:8b1563f11724b012b255ba91c4ff65189e0eaaffe5f787552c6c2c49eb2dad00,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1702421926658891840,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8m62d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d4f02b64-2b77-4521-8367-a5dd110caf3f,},Annotations:map[string]string{io.kubernetes.container.hash: a628aea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49231e9bcbd9ddc260b2ba1a655f791f29fa580084246c05da35233bd5198ed0,PodSandboxId:6fa8970869d53aa5d26058f2e35ec31eb63c0d4c3d4bb15922c2035d65a4de26,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702421867873619900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df257cb0-9230-401c-bfd9-e8d93b09c2dd,},Annotations:map[string]string{io.kubernetes.container.hash: 700cb1df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18c6aefcbf7a946913f8416e4abd9e3ad9e3ad1fa1fc894362bae202b90dba5,PodSandboxId:9f5222bbd7a69fd97b35969c3ba19fe545a5496b0d3c7652eee451d2b8e6d6ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,Sta
te:CONTAINER_RUNNING,CreatedAt:1702421863516433905,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2cptn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d540fce8-3ea8-46bc-9484-1348f43f1f3b,},Annotations:map[string]string{io.kubernetes.container.hash: caa32064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c852c624dd6802330eee64044b4f7169c62410bb167b21a6756315c69d8f17a,PodSandboxId:5d8b12340b6fe385d362485a22fc617209d83249b20efc309a5a046f0070a5dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:17
02421857170859910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5p4zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3248804-9725-4b9d-8781-e2881ea46ca7,},Annotations:map[string]string{io.kubernetes.container.hash: 32dec3cf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab15762cf6108afb923413dc4a5bd7e6f97bcb2c6e15920b3af46a5739c4f191,PodSandboxId:5ea255122397c47cdd6ce12a864eeea14b212fc82fbde60fb782b1a5ed48cd1f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d3
5c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702421830899680239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-577685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09687aca731ec93fceebeb4b9ffb4a5a,},Annotations:map[string]string{io.kubernetes.container.hash: e00fdaf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ad068a00e56b6a71280d508465492433c1a177edb6f45b00e120f327ee28a6,PodSandboxId:3f223a83c9df711935a63a32a27b28a70c2c330f81c0bc19c4b3ef44adb157ad,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},}
,ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702421830947239558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-577685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03be545ee0d1f351ad0ecebdf06a7726,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f18e7e6483633af8c28d948b0c0438d1918dda5bdc68622484d15b9ea8ba73b2,PodSandboxId:b33c4a254ab57f3a51c4e73723613c326443f44668cd6fff20f35407baa4c0fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:regis
try.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702421830880809558,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-577685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f0f1be916aeb3ef69080697700ddd03,},Annotations:map[string]string{io.kubernetes.container.hash: 423ed332,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962d3be564edfd79775ff8e7bb1f3dd02a1365bc397290ff36d80ee28ff28d08,PodSandboxId:82636936e765cf412ccbf01d841b9b54a9652a1e8f25e7d23ea718cff096e799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8
s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702421830548348446,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-577685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb5d9a7e3bcfc3549d819627df4f24bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6b5cb2bd-7fce-45cd-9551-93f3710c24e1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.772962987Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=19fc034c-2e9d-4bef-9cad-d9cc11449c27 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.773044649Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=19fc034c-2e9d-4bef-9cad-d9cc11449c27 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.774258515Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4d9f6a59-d6eb-4aa7-bfac-465b0532d09a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.775662847Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702422186775646366,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543773,},InodesUsed:&UInt64Value{Value:227,},},},}" file="go-grpc-middleware/chain.go:25" id=4d9f6a59-d6eb-4aa7-bfac-465b0532d09a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.776247396Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=da499f01-49a6-4998-b042-db3a890d99a2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.776306005Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=da499f01-49a6-4998-b042-db3a890d99a2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.776675043Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f973a2435ac22a637e141edf896a62c9d462ac1dcd2bdc8b2d7b4f08b610215,PodSandboxId:5ef0ff1a8c56c6f70cd831ab3fd87fc2b214d1378fec10f4903d591d4f5f0c2d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702422179520846580,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-znngs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a47983c5-b656-4144-9afc-8813e008ff8e,},Annotations:map[string]string{io.kubernetes.container.hash: babf058c,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81957d9e0d8dc2e26fd3f5f6b0e75ae047a689a285da396cb2ab55fa522e5a63,PodSandboxId:d024bfcb7af46904d95a8635429f52543bbd2786b8eebfff9c1c1fce61abe1ce,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1702422055156091765,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-2dshq,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: a98eeba9-4220-4a45-9383-ca3970d3c877,},An
notations:map[string]string{io.kubernetes.container.hash: 7d87cb6c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8eb120962a5e1ae43987f7a568af72e628b559bef0b2e853cd05368eb06cb6,PodSandboxId:faefb957fdd8e61ecca5f9d9ed696cf5129e52557be6bbca65c73fba7a3871ca,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702422037372688550,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: c4c14be0-be04-41cc-a432-9bd05871708b,},Annotations:map[string]string{io.kubernetes.container.hash: 127cbd0e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dd6b9bfa01220451967d7148bcd5f792b977255924fd175001c133fa41947ac,PodSandboxId:8a796eea5e603630ddf572243faf379b878ec40f0e660f8218a3a2f859baec1b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1702422010353911681,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-68kz9,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 1fd4f1da-11ad-4afe-87c3-8ffa5256a97a,},Annotations:map[string]string{io.kubernetes.container.hash: 4aaead07,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809da99345c5ab14f731cf90549e75f86d8cb0e4a541e1c7da3825c4748cdb62,PodSandboxId:8e153eaa800032049b936a4e692628b0cdd1b4b835f8f0a4315786003ddcd84f,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17024219
55473481551,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-889mp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 07578813-879d-4683-9946-ee13082762b5,},Annotations:map[string]string{io.kubernetes.container.hash: c7b4aaf8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73d1792e9979dcbfa097e6e68325d866014e1b99cb647dbe52b8c0f9c809f31d,PodSandboxId:8b1563f11724b012b255ba91c4ff65189e0eaaffe5f787552c6c2c49eb2dad00,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1702421926658891840,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8m62d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d4f02b64-2b77-4521-8367-a5dd110caf3f,},Annotations:map[string]string{io.kubernetes.container.hash: a628aea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49231e9bcbd9ddc260b2ba1a655f791f29fa580084246c05da35233bd5198ed0,PodSandboxId:6fa8970869d53aa5d26058f2e35ec31eb63c0d4c3d4bb15922c2035d65a4de26,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702421867873619900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df257cb0-9230-401c-bfd9-e8d93b09c2dd,},Annotations:map[string]string{io.kubernetes.container.hash: 700cb1df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18c6aefcbf7a946913f8416e4abd9e3ad9e3ad1fa1fc894362bae202b90dba5,PodSandboxId:9f5222bbd7a69fd97b35969c3ba19fe545a5496b0d3c7652eee451d2b8e6d6ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,Sta
te:CONTAINER_RUNNING,CreatedAt:1702421863516433905,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2cptn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d540fce8-3ea8-46bc-9484-1348f43f1f3b,},Annotations:map[string]string{io.kubernetes.container.hash: caa32064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c852c624dd6802330eee64044b4f7169c62410bb167b21a6756315c69d8f17a,PodSandboxId:5d8b12340b6fe385d362485a22fc617209d83249b20efc309a5a046f0070a5dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:17
02421857170859910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5p4zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3248804-9725-4b9d-8781-e2881ea46ca7,},Annotations:map[string]string{io.kubernetes.container.hash: 32dec3cf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab15762cf6108afb923413dc4a5bd7e6f97bcb2c6e15920b3af46a5739c4f191,PodSandboxId:5ea255122397c47cdd6ce12a864eeea14b212fc82fbde60fb782b1a5ed48cd1f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d3
5c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702421830899680239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-577685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09687aca731ec93fceebeb4b9ffb4a5a,},Annotations:map[string]string{io.kubernetes.container.hash: e00fdaf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ad068a00e56b6a71280d508465492433c1a177edb6f45b00e120f327ee28a6,PodSandboxId:3f223a83c9df711935a63a32a27b28a70c2c330f81c0bc19c4b3ef44adb157ad,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},}
,ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702421830947239558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-577685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03be545ee0d1f351ad0ecebdf06a7726,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f18e7e6483633af8c28d948b0c0438d1918dda5bdc68622484d15b9ea8ba73b2,PodSandboxId:b33c4a254ab57f3a51c4e73723613c326443f44668cd6fff20f35407baa4c0fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:regis
try.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702421830880809558,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-577685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f0f1be916aeb3ef69080697700ddd03,},Annotations:map[string]string{io.kubernetes.container.hash: 423ed332,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962d3be564edfd79775ff8e7bb1f3dd02a1365bc397290ff36d80ee28ff28d08,PodSandboxId:82636936e765cf412ccbf01d841b9b54a9652a1e8f25e7d23ea718cff096e799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8
s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702421830548348446,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-577685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb5d9a7e3bcfc3549d819627df4f24bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=da499f01-49a6-4998-b042-db3a890d99a2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.814014842Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=627507f4-be70-44f0-8f10-5fe059270f56 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.814072615Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=627507f4-be70-44f0-8f10-5fe059270f56 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.815447722Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f06d7121-6e7b-41ae-85ff-a3b1cd01bf10 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.816671426Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702422186816655826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543773,},InodesUsed:&UInt64Value{Value:227,},},},}" file="go-grpc-middleware/chain.go:25" id=f06d7121-6e7b-41ae-85ff-a3b1cd01bf10 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.817505391Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9fad6ad5-8a18-49b5-bede-b0c2bb83bb47 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.817564196Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9fad6ad5-8a18-49b5-bede-b0c2bb83bb47 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:03:06 addons-577685 crio[719]: time="2023-12-12 23:03:06.817840163Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f973a2435ac22a637e141edf896a62c9d462ac1dcd2bdc8b2d7b4f08b610215,PodSandboxId:5ef0ff1a8c56c6f70cd831ab3fd87fc2b214d1378fec10f4903d591d4f5f0c2d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702422179520846580,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-znngs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a47983c5-b656-4144-9afc-8813e008ff8e,},Annotations:map[string]string{io.kubernetes.container.hash: babf058c,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81957d9e0d8dc2e26fd3f5f6b0e75ae047a689a285da396cb2ab55fa522e5a63,PodSandboxId:d024bfcb7af46904d95a8635429f52543bbd2786b8eebfff9c1c1fce61abe1ce,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1702422055156091765,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-2dshq,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: a98eeba9-4220-4a45-9383-ca3970d3c877,},An
notations:map[string]string{io.kubernetes.container.hash: 7d87cb6c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8eb120962a5e1ae43987f7a568af72e628b559bef0b2e853cd05368eb06cb6,PodSandboxId:faefb957fdd8e61ecca5f9d9ed696cf5129e52557be6bbca65c73fba7a3871ca,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702422037372688550,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: c4c14be0-be04-41cc-a432-9bd05871708b,},Annotations:map[string]string{io.kubernetes.container.hash: 127cbd0e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dd6b9bfa01220451967d7148bcd5f792b977255924fd175001c133fa41947ac,PodSandboxId:8a796eea5e603630ddf572243faf379b878ec40f0e660f8218a3a2f859baec1b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1702422010353911681,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-68kz9,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 1fd4f1da-11ad-4afe-87c3-8ffa5256a97a,},Annotations:map[string]string{io.kubernetes.container.hash: 4aaead07,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809da99345c5ab14f731cf90549e75f86d8cb0e4a541e1c7da3825c4748cdb62,PodSandboxId:8e153eaa800032049b936a4e692628b0cdd1b4b835f8f0a4315786003ddcd84f,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17024219
55473481551,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-889mp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 07578813-879d-4683-9946-ee13082762b5,},Annotations:map[string]string{io.kubernetes.container.hash: c7b4aaf8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73d1792e9979dcbfa097e6e68325d866014e1b99cb647dbe52b8c0f9c809f31d,PodSandboxId:8b1563f11724b012b255ba91c4ff65189e0eaaffe5f787552c6c2c49eb2dad00,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1702421926658891840,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8m62d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d4f02b64-2b77-4521-8367-a5dd110caf3f,},Annotations:map[string]string{io.kubernetes.container.hash: a628aea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49231e9bcbd9ddc260b2ba1a655f791f29fa580084246c05da35233bd5198ed0,PodSandboxId:6fa8970869d53aa5d26058f2e35ec31eb63c0d4c3d4bb15922c2035d65a4de26,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702421867873619900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df257cb0-9230-401c-bfd9-e8d93b09c2dd,},Annotations:map[string]string{io.kubernetes.container.hash: 700cb1df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18c6aefcbf7a946913f8416e4abd9e3ad9e3ad1fa1fc894362bae202b90dba5,PodSandboxId:9f5222bbd7a69fd97b35969c3ba19fe545a5496b0d3c7652eee451d2b8e6d6ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,Sta
te:CONTAINER_RUNNING,CreatedAt:1702421863516433905,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2cptn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d540fce8-3ea8-46bc-9484-1348f43f1f3b,},Annotations:map[string]string{io.kubernetes.container.hash: caa32064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c852c624dd6802330eee64044b4f7169c62410bb167b21a6756315c69d8f17a,PodSandboxId:5d8b12340b6fe385d362485a22fc617209d83249b20efc309a5a046f0070a5dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:17
02421857170859910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5p4zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3248804-9725-4b9d-8781-e2881ea46ca7,},Annotations:map[string]string{io.kubernetes.container.hash: 32dec3cf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab15762cf6108afb923413dc4a5bd7e6f97bcb2c6e15920b3af46a5739c4f191,PodSandboxId:5ea255122397c47cdd6ce12a864eeea14b212fc82fbde60fb782b1a5ed48cd1f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d3
5c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702421830899680239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-577685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09687aca731ec93fceebeb4b9ffb4a5a,},Annotations:map[string]string{io.kubernetes.container.hash: e00fdaf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ad068a00e56b6a71280d508465492433c1a177edb6f45b00e120f327ee28a6,PodSandboxId:3f223a83c9df711935a63a32a27b28a70c2c330f81c0bc19c4b3ef44adb157ad,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},}
,ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702421830947239558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-577685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03be545ee0d1f351ad0ecebdf06a7726,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f18e7e6483633af8c28d948b0c0438d1918dda5bdc68622484d15b9ea8ba73b2,PodSandboxId:b33c4a254ab57f3a51c4e73723613c326443f44668cd6fff20f35407baa4c0fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:regis
try.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702421830880809558,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-577685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f0f1be916aeb3ef69080697700ddd03,},Annotations:map[string]string{io.kubernetes.container.hash: 423ed332,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962d3be564edfd79775ff8e7bb1f3dd02a1365bc397290ff36d80ee28ff28d08,PodSandboxId:82636936e765cf412ccbf01d841b9b54a9652a1e8f25e7d23ea718cff096e799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8
s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702421830548348446,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-577685,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb5d9a7e3bcfc3549d819627df4f24bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9fad6ad5-8a18-49b5-bede-b0c2bb83bb47 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2f973a2435ac2       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   5ef0ff1a8c56c       hello-world-app-5d77478584-znngs
	81957d9e0d8dc       ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1                        2 minutes ago       Running             headlamp                  0                   d024bfcb7af46       headlamp-777fd4b855-2dshq
	4b8eb120962a5       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                              2 minutes ago       Running             nginx                     0                   faefb957fdd8e       nginx
	8dd6b9bfa0122       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   8a796eea5e603       gcp-auth-d4c87556c-68kz9
	809da99345c5a       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             3 minutes ago       Exited              patch                     3                   8e153eaa80003       ingress-nginx-admission-patch-889mp
	73d1792e9979d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   4 minutes ago       Exited              create                    0                   8b1563f11724b       ingress-nginx-admission-create-8m62d
	49231e9bcbd9d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   6fa8970869d53       storage-provisioner
	c18c6aefcbf7a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             5 minutes ago       Running             kube-proxy                0                   9f5222bbd7a69       kube-proxy-2cptn
	7c852c624dd68       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             5 minutes ago       Running             coredns                   0                   5d8b12340b6fe       coredns-5dd5756b68-5p4zl
	58ad068a00e56       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             5 minutes ago       Running             kube-scheduler            0                   3f223a83c9df7       kube-scheduler-addons-577685
	ab15762cf6108       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             5 minutes ago       Running             etcd                      0                   5ea255122397c       etcd-addons-577685
	f18e7e6483633       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             5 minutes ago       Running             kube-apiserver            0                   b33c4a254ab57       kube-apiserver-addons-577685
	962d3be564edf       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             5 minutes ago       Running             kube-controller-manager   0                   82636936e765c       kube-controller-manager-addons-577685
	
	* 
	* ==> coredns [7c852c624dd6802330eee64044b4f7169c62410bb167b21a6756315c69d8f17a] <==
	* [INFO] 10.244.0.7:53610 - 47840 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00017675s
	[INFO] 10.244.0.7:50900 - 53396 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000143707s
	[INFO] 10.244.0.7:50900 - 48274 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062687s
	[INFO] 10.244.0.7:36976 - 28565 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000071956s
	[INFO] 10.244.0.7:36976 - 23447 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009829s
	[INFO] 10.244.0.7:36129 - 2430 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000191279s
	[INFO] 10.244.0.7:36129 - 59771 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000098629s
	[INFO] 10.244.0.7:41170 - 15037 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000104228s
	[INFO] 10.244.0.7:41170 - 37048 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000087056s
	[INFO] 10.244.0.7:49625 - 2433 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000097132s
	[INFO] 10.244.0.7:49625 - 46979 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000029471s
	[INFO] 10.244.0.7:47489 - 64532 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000063159s
	[INFO] 10.244.0.7:47489 - 52490 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000121588s
	[INFO] 10.244.0.7:45267 - 3806 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000061815s
	[INFO] 10.244.0.7:45267 - 32989 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000036569s
	[INFO] 10.244.0.21:47418 - 41150 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000793713s
	[INFO] 10.244.0.21:34947 - 31291 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000108455s
	[INFO] 10.244.0.21:49403 - 47459 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00019437s
	[INFO] 10.244.0.21:57079 - 6568 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000093359s
	[INFO] 10.244.0.21:53469 - 53282 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000192984s
	[INFO] 10.244.0.21:54130 - 30985 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000073616s
	[INFO] 10.244.0.21:42258 - 10783 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000770698s
	[INFO] 10.244.0.21:53541 - 56358 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 420 0.000513621s
	[INFO] 10.244.0.24:36227 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000398421s
	[INFO] 10.244.0.24:39734 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00033527s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-577685
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-577685
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446
	                    minikube.k8s.io/name=addons-577685
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T22_57_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-577685
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 22:57:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-577685
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:03:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:01:23 +0000   Tue, 12 Dec 2023 22:57:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:01:23 +0000   Tue, 12 Dec 2023 22:57:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:01:23 +0000   Tue, 12 Dec 2023 22:57:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:01:23 +0000   Tue, 12 Dec 2023 22:57:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.136
	  Hostname:    addons-577685
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 dbf7d39d65af46649f2de2c50cf21275
	  System UUID:                dbf7d39d-65af-4664-9f2d-e2c50cf21275
	  Boot ID:                    86920755-5e7e-4691-9b8e-e1f3ed496b89
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-znngs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  gcp-auth                    gcp-auth-d4c87556c-68kz9                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  headlamp                    headlamp-777fd4b855-2dshq                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 coredns-5dd5756b68-5p4zl                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m36s
	  kube-system                 etcd-addons-577685                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m49s
	  kube-system                 kube-apiserver-addons-577685             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 kube-controller-manager-addons-577685    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-proxy-2cptn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-scheduler-addons-577685             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m20s  kube-proxy       
	  Normal  Starting                 5m49s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m49s  kubelet          Node addons-577685 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m49s  kubelet          Node addons-577685 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m49s  kubelet          Node addons-577685 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m49s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m49s  kubelet          Node addons-577685 status is now: NodeReady
	  Normal  RegisteredNode           5m38s  node-controller  Node addons-577685 event: Registered Node addons-577685 in Controller
	
	* 
	* ==> dmesg <==
	* [  +5.060377] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.216355] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.102434] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.138262] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.108308] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.200553] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[Dec12 22:57] systemd-fstab-generator[915]: Ignoring "noauto" for root device
	[  +9.259400] systemd-fstab-generator[1254]: Ignoring "noauto" for root device
	[ +19.846869] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.211342] kauditd_printk_skb: 64 callbacks suppressed
	[Dec12 22:58] kauditd_printk_skb: 24 callbacks suppressed
	[ +23.410413] kauditd_printk_skb: 18 callbacks suppressed
	[ +19.699017] kauditd_printk_skb: 30 callbacks suppressed
	[Dec12 22:59] kauditd_printk_skb: 18 callbacks suppressed
	[ +42.046511] kauditd_printk_skb: 18 callbacks suppressed
	[Dec12 23:00] kauditd_printk_skb: 1 callbacks suppressed
	[ +17.248407] kauditd_printk_skb: 10 callbacks suppressed
	[ +10.210158] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.826534] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.391604] kauditd_printk_skb: 3 callbacks suppressed
	[  +9.352637] kauditd_printk_skb: 21 callbacks suppressed
	[Dec12 23:01] kauditd_printk_skb: 20 callbacks suppressed
	[Dec12 23:03] kauditd_printk_skb: 7 callbacks suppressed
	
	* 
	* ==> etcd [ab15762cf6108afb923413dc4a5bd7e6f97bcb2c6e15920b3af46a5739c4f191] <==
	* {"level":"warn","ts":"2023-12-12T23:00:10.23594Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T23:00:09.791561Z","time spent":"444.211876ms","remote":"127.0.0.1:59894","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1269 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2023-12-12T23:00:10.236137Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"310.829473ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2023-12-12T23:00:10.23625Z","caller":"traceutil/trace.go:171","msg":"trace[1114481196] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1275; }","duration":"310.944518ms","start":"2023-12-12T23:00:09.925299Z","end":"2023-12-12T23:00:10.236243Z","steps":["trace[1114481196] 'agreement among raft nodes before linearized reading'  (duration: 310.767841ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T23:00:10.236302Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T23:00:09.925286Z","time spent":"311.009618ms","remote":"127.0.0.1:59916","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":1,"response size":576,"request content":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" "}
	{"level":"warn","ts":"2023-12-12T23:00:10.236445Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.864509ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2023-12-12T23:00:10.23656Z","caller":"traceutil/trace.go:171","msg":"trace[1844694810] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1275; }","duration":"256.92652ms","start":"2023-12-12T23:00:09.979567Z","end":"2023-12-12T23:00:10.236494Z","steps":["trace[1844694810] 'agreement among raft nodes before linearized reading'  (duration: 256.779055ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T23:00:23.794019Z","caller":"traceutil/trace.go:171","msg":"trace[1899044035] transaction","detail":"{read_only:false; response_revision:1370; number_of_response:1; }","duration":"218.066708ms","start":"2023-12-12T23:00:23.575938Z","end":"2023-12-12T23:00:23.794004Z","steps":["trace[1899044035] 'process raft request'  (duration: 217.948567ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T23:00:29.14722Z","caller":"traceutil/trace.go:171","msg":"trace[876842220] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1418; }","duration":"114.901264ms","start":"2023-12-12T23:00:29.032305Z","end":"2023-12-12T23:00:29.147206Z","steps":["trace[876842220] 'process raft request'  (duration: 112.956092ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T23:00:51.704071Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.162052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3753"}
	{"level":"info","ts":"2023-12-12T23:00:51.704157Z","caller":"traceutil/trace.go:171","msg":"trace[948574526] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1655; }","duration":"156.26946ms","start":"2023-12-12T23:00:51.547876Z","end":"2023-12-12T23:00:51.704146Z","steps":["trace[948574526] 'range keys from in-memory index tree'  (duration: 156.074975ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T23:00:51.704491Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.306418ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2023-12-12T23:00:51.70457Z","caller":"traceutil/trace.go:171","msg":"trace[1565063184] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1655; }","duration":"180.390012ms","start":"2023-12-12T23:00:51.524171Z","end":"2023-12-12T23:00:51.704561Z","steps":["trace[1565063184] 'range keys from in-memory index tree'  (duration: 180.223984ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T23:00:54.807129Z","caller":"traceutil/trace.go:171","msg":"trace[614745879] linearizableReadLoop","detail":"{readStateIndex:1732; appliedIndex:1731; }","duration":"372.570481ms","start":"2023-12-12T23:00:54.434536Z","end":"2023-12-12T23:00:54.807106Z","steps":["trace[614745879] 'read index received'  (duration: 372.272384ms)","trace[614745879] 'applied index is now lower than readState.Index'  (duration: 297.573µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T23:00:54.807586Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"372.999943ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-12-12T23:00:54.807721Z","caller":"traceutil/trace.go:171","msg":"trace[815244393] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; response_count:0; response_revision:1661; }","duration":"373.218579ms","start":"2023-12-12T23:00:54.434487Z","end":"2023-12-12T23:00:54.807706Z","steps":["trace[815244393] 'agreement among raft nodes before linearized reading'  (duration: 372.864024ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T23:00:54.807813Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T23:00:54.434474Z","time spent":"373.325587ms","remote":"127.0.0.1:59894","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":10,"response size":29,"request content":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true "}
	{"level":"info","ts":"2023-12-12T23:00:54.807725Z","caller":"traceutil/trace.go:171","msg":"trace[1523420028] transaction","detail":"{read_only:false; response_revision:1661; number_of_response:1; }","duration":"494.028835ms","start":"2023-12-12T23:00:54.313688Z","end":"2023-12-12T23:00:54.807717Z","steps":["trace[1523420028] 'process raft request'  (duration: 493.19298ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T23:00:54.808008Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T23:00:54.313664Z","time spent":"494.260161ms","remote":"127.0.0.1:59898","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4244,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/local-path-storage/helper-pod-delete-pvc-4645bbf6-7858-4980-ba0f-98b14aad17a1\" mod_revision:1643 > success:<request_put:<key:\"/registry/pods/local-path-storage/helper-pod-delete-pvc-4645bbf6-7858-4980-ba0f-98b14aad17a1\" value_size:4144 >> failure:<request_range:<key:\"/registry/pods/local-path-storage/helper-pod-delete-pvc-4645bbf6-7858-4980-ba0f-98b14aad17a1\" > >"}
	{"level":"warn","ts":"2023-12-12T23:00:54.807688Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.66022ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-12-12T23:00:54.808173Z","caller":"traceutil/trace.go:171","msg":"trace[790724141] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1661; }","duration":"113.149053ms","start":"2023-12-12T23:00:54.695015Z","end":"2023-12-12T23:00:54.808164Z","steps":["trace[790724141] 'agreement among raft nodes before linearized reading'  (duration: 112.644198ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T23:00:54.807607Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.815541ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3753"}
	{"level":"info","ts":"2023-12-12T23:00:54.808282Z","caller":"traceutil/trace.go:171","msg":"trace[24434903] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1661; }","duration":"261.502158ms","start":"2023-12-12T23:00:54.546773Z","end":"2023-12-12T23:00:54.808275Z","steps":["trace[24434903] 'agreement among raft nodes before linearized reading'  (duration: 260.775358ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T23:01:27.171766Z","caller":"traceutil/trace.go:171","msg":"trace[1495189558] linearizableReadLoop","detail":"{readStateIndex:1928; appliedIndex:1927; }","duration":"158.57537ms","start":"2023-12-12T23:01:27.01317Z","end":"2023-12-12T23:01:27.171745Z","steps":["trace[1495189558] 'read index received'  (duration: 158.387097ms)","trace[1495189558] 'applied index is now lower than readState.Index'  (duration: 187.354µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T23:01:27.171944Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.769528ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-12-12T23:01:27.171967Z","caller":"traceutil/trace.go:171","msg":"trace[348968180] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1848; }","duration":"158.813389ms","start":"2023-12-12T23:01:27.013146Z","end":"2023-12-12T23:01:27.171959Z","steps":["trace[348968180] 'agreement among raft nodes before linearized reading'  (duration: 158.740939ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [8dd6b9bfa01220451967d7148bcd5f792b977255924fd175001c133fa41947ac] <==
	* 2023/12/12 23:00:10 GCP Auth Webhook started!
	2023/12/12 23:00:15 Ready to marshal response ...
	2023/12/12 23:00:15 Ready to write response ...
	2023/12/12 23:00:16 Ready to marshal response ...
	2023/12/12 23:00:16 Ready to write response ...
	2023/12/12 23:00:22 Ready to marshal response ...
	2023/12/12 23:00:22 Ready to write response ...
	2023/12/12 23:00:30 Ready to marshal response ...
	2023/12/12 23:00:30 Ready to write response ...
	2023/12/12 23:00:31 Ready to marshal response ...
	2023/12/12 23:00:31 Ready to write response ...
	2023/12/12 23:00:31 Ready to marshal response ...
	2023/12/12 23:00:31 Ready to write response ...
	2023/12/12 23:00:45 Ready to marshal response ...
	2023/12/12 23:00:45 Ready to write response ...
	2023/12/12 23:00:47 Ready to marshal response ...
	2023/12/12 23:00:47 Ready to write response ...
	2023/12/12 23:00:47 Ready to marshal response ...
	2023/12/12 23:00:47 Ready to write response ...
	2023/12/12 23:00:47 Ready to marshal response ...
	2023/12/12 23:00:47 Ready to write response ...
	2023/12/12 23:00:50 Ready to marshal response ...
	2023/12/12 23:00:50 Ready to write response ...
	2023/12/12 23:02:56 Ready to marshal response ...
	2023/12/12 23:02:56 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  23:03:07 up 6 min,  0 users,  load average: 0.35, 1.30, 0.77
	Linux addons-577685 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [f18e7e6483633af8c28d948b0c0438d1918dda5bdc68622484d15b9ea8ba73b2] <==
	* Trace[1960939987]: ["GuaranteedUpdate etcd3" audit-id:3da2945d-8113-470b-b3fa-fa23756d61cd,key:/pods/local-path-storage/helper-pod-delete-pvc-4645bbf6-7858-4980-ba0f-98b14aad17a1,type:*core.Pod,resource:pods 511ms (23:00:54.302)
	Trace[1960939987]:  ---"Txn call completed" 497ms (23:00:54.810)]
	Trace[1960939987]: ---"Object stored in database" 502ms (23:00:54.810)
	Trace[1960939987]: [511.492064ms] [511.492064ms] END
	I1212 23:01:05.179030       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 23:01:05.179115       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 23:01:05.198046       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 23:01:05.198323       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 23:01:05.211980       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 23:01:05.212092       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 23:01:05.226030       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 23:01:05.226144       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 23:01:05.239806       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 23:01:05.239870       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 23:01:05.240038       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 23:01:05.240094       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 23:01:05.260252       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 23:01:05.260334       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 23:01:05.269021       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 23:01:05.269090       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1212 23:01:06.241058       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1212 23:01:06.269134       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	E1212 23:01:06.286783       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	W1212 23:01:06.296753       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1212 23:02:56.296795       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.8.252"}
	
	* 
	* ==> kube-controller-manager [962d3be564edfd79775ff8e7bb1f3dd02a1365bc397290ff36d80ee28ff28d08] <==
	* W1212 23:01:48.230675       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 23:01:48.230736       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 23:01:50.464196       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 23:01:50.464221       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 23:02:16.627671       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 23:02:16.627877       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 23:02:23.566828       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 23:02:23.566862       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 23:02:31.129888       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 23:02:31.129938       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 23:02:38.815335       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 23:02:38.815533       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1212 23:02:56.039364       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1212 23:02:56.096758       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-znngs"
	I1212 23:02:56.114976       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="77.312496ms"
	I1212 23:02:56.145243       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="29.996812ms"
	I1212 23:02:56.145487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="124.291µs"
	I1212 23:02:56.161676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="71.52µs"
	I1212 23:02:58.730553       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1212 23:02:58.737992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="4.207µs"
	I1212 23:02:58.749231       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1212 23:03:00.026258       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="8.675011ms"
	I1212 23:03:00.026474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="143.08µs"
	W1212 23:03:06.349219       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 23:03:06.349303       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [c18c6aefcbf7a946913f8416e4abd9e3ad9e3ad1fa1fc894362bae202b90dba5] <==
	* I1212 22:57:45.329480       1 server_others.go:69] "Using iptables proxy"
	I1212 22:57:45.472009       1 node.go:141] Successfully retrieved node IP: 192.168.39.136
	I1212 22:57:46.341614       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 22:57:46.341662       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 22:57:46.371691       1 server_others.go:152] "Using iptables Proxier"
	I1212 22:57:46.372534       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 22:57:46.372769       1 server.go:846] "Version info" version="v1.28.4"
	I1212 22:57:46.372857       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 22:57:46.381237       1 config.go:188] "Starting service config controller"
	I1212 22:57:46.381286       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 22:57:46.381310       1 config.go:97] "Starting endpoint slice config controller"
	I1212 22:57:46.381314       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 22:57:46.389744       1 config.go:315] "Starting node config controller"
	I1212 22:57:46.389830       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 22:57:46.485466       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 22:57:46.485544       1 shared_informer.go:318] Caches are synced for service config
	I1212 22:57:46.489927       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [58ad068a00e56b6a71280d508465492433c1a177edb6f45b00e120f327ee28a6] <==
	* W1212 22:57:15.940240       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 22:57:15.940442       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 22:57:16.015022       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 22:57:16.015104       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 22:57:16.172616       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 22:57:16.172669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 22:57:16.182982       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 22:57:16.183104       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1212 22:57:16.237841       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 22:57:16.238088       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 22:57:16.310102       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 22:57:16.310239       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 22:57:16.319069       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 22:57:16.319187       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 22:57:16.347864       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 22:57:16.347952       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 22:57:16.361574       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 22:57:16.361670       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 22:57:16.370464       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 22:57:16.370568       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 22:57:16.453932       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 22:57:16.453991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 22:57:16.469166       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 22:57:16.469262       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1212 22:57:18.619752       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 22:56:46 UTC, ends at Tue 2023-12-12 23:03:07 UTC. --
	Dec 12 23:02:56 addons-577685 kubelet[1261]: I1212 23:02:56.110557    1261 memory_manager.go:346] "RemoveStaleState removing state" podUID="053b6d5b-92b3-4722-b488-599e03d4f1f5" containerName="task-pv-container"
	Dec 12 23:02:56 addons-577685 kubelet[1261]: I1212 23:02:56.110563    1261 memory_manager.go:346] "RemoveStaleState removing state" podUID="d7465fca-56ba-4835-a871-00f68bc478b9" containerName="volume-snapshot-controller"
	Dec 12 23:02:56 addons-577685 kubelet[1261]: I1212 23:02:56.229996    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t4j7\" (UniqueName: \"kubernetes.io/projected/a47983c5-b656-4144-9afc-8813e008ff8e-kube-api-access-2t4j7\") pod \"hello-world-app-5d77478584-znngs\" (UID: \"a47983c5-b656-4144-9afc-8813e008ff8e\") " pod="default/hello-world-app-5d77478584-znngs"
	Dec 12 23:02:56 addons-577685 kubelet[1261]: I1212 23:02:56.230074    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a47983c5-b656-4144-9afc-8813e008ff8e-gcp-creds\") pod \"hello-world-app-5d77478584-znngs\" (UID: \"a47983c5-b656-4144-9afc-8813e008ff8e\") " pod="default/hello-world-app-5d77478584-znngs"
	Dec 12 23:02:57 addons-577685 kubelet[1261]: I1212 23:02:57.540351    1261 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlkng\" (UniqueName: \"kubernetes.io/projected/51a0e5db-f928-4b3f-acef-f9d813ba4965-kube-api-access-hlkng\") pod \"51a0e5db-f928-4b3f-acef-f9d813ba4965\" (UID: \"51a0e5db-f928-4b3f-acef-f9d813ba4965\") "
	Dec 12 23:02:57 addons-577685 kubelet[1261]: I1212 23:02:57.542957    1261 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51a0e5db-f928-4b3f-acef-f9d813ba4965-kube-api-access-hlkng" (OuterVolumeSpecName: "kube-api-access-hlkng") pod "51a0e5db-f928-4b3f-acef-f9d813ba4965" (UID: "51a0e5db-f928-4b3f-acef-f9d813ba4965"). InnerVolumeSpecName "kube-api-access-hlkng". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 12 23:02:57 addons-577685 kubelet[1261]: I1212 23:02:57.641306    1261 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hlkng\" (UniqueName: \"kubernetes.io/projected/51a0e5db-f928-4b3f-acef-f9d813ba4965-kube-api-access-hlkng\") on node \"addons-577685\" DevicePath \"\""
	Dec 12 23:02:57 addons-577685 kubelet[1261]: I1212 23:02:57.989437    1261 scope.go:117] "RemoveContainer" containerID="fd981bfd66de9e86b17535b9b52c9a6d375e27d557a88ab7b1e7492daa870c3d"
	Dec 12 23:02:58 addons-577685 kubelet[1261]: I1212 23:02:58.032725    1261 scope.go:117] "RemoveContainer" containerID="fd981bfd66de9e86b17535b9b52c9a6d375e27d557a88ab7b1e7492daa870c3d"
	Dec 12 23:02:58 addons-577685 kubelet[1261]: E1212 23:02:58.033337    1261 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd981bfd66de9e86b17535b9b52c9a6d375e27d557a88ab7b1e7492daa870c3d\": container with ID starting with fd981bfd66de9e86b17535b9b52c9a6d375e27d557a88ab7b1e7492daa870c3d not found: ID does not exist" containerID="fd981bfd66de9e86b17535b9b52c9a6d375e27d557a88ab7b1e7492daa870c3d"
	Dec 12 23:02:58 addons-577685 kubelet[1261]: I1212 23:02:58.033463    1261 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd981bfd66de9e86b17535b9b52c9a6d375e27d557a88ab7b1e7492daa870c3d"} err="failed to get container status \"fd981bfd66de9e86b17535b9b52c9a6d375e27d557a88ab7b1e7492daa870c3d\": rpc error: code = NotFound desc = could not find container \"fd981bfd66de9e86b17535b9b52c9a6d375e27d557a88ab7b1e7492daa870c3d\": container with ID starting with fd981bfd66de9e86b17535b9b52c9a6d375e27d557a88ab7b1e7492daa870c3d not found: ID does not exist"
	Dec 12 23:02:58 addons-577685 kubelet[1261]: I1212 23:02:58.431762    1261 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="51a0e5db-f928-4b3f-acef-f9d813ba4965" path="/var/lib/kubelet/pods/51a0e5db-f928-4b3f-acef-f9d813ba4965/volumes"
	Dec 12 23:03:00 addons-577685 kubelet[1261]: I1212 23:03:00.432546    1261 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="07578813-879d-4683-9946-ee13082762b5" path="/var/lib/kubelet/pods/07578813-879d-4683-9946-ee13082762b5/volumes"
	Dec 12 23:03:00 addons-577685 kubelet[1261]: I1212 23:03:00.433034    1261 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d4f02b64-2b77-4521-8367-a5dd110caf3f" path="/var/lib/kubelet/pods/d4f02b64-2b77-4521-8367-a5dd110caf3f/volumes"
	Dec 12 23:03:02 addons-577685 kubelet[1261]: I1212 23:03:02.019324    1261 scope.go:117] "RemoveContainer" containerID="b11f38239ed1294dd112b2ffb73b317977264e62341de8727900dda270f4a85d"
	Dec 12 23:03:02 addons-577685 kubelet[1261]: I1212 23:03:02.061907    1261 scope.go:117] "RemoveContainer" containerID="b11f38239ed1294dd112b2ffb73b317977264e62341de8727900dda270f4a85d"
	Dec 12 23:03:02 addons-577685 kubelet[1261]: E1212 23:03:02.062868    1261 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b11f38239ed1294dd112b2ffb73b317977264e62341de8727900dda270f4a85d\": container with ID starting with b11f38239ed1294dd112b2ffb73b317977264e62341de8727900dda270f4a85d not found: ID does not exist" containerID="b11f38239ed1294dd112b2ffb73b317977264e62341de8727900dda270f4a85d"
	Dec 12 23:03:02 addons-577685 kubelet[1261]: I1212 23:03:02.063054    1261 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b11f38239ed1294dd112b2ffb73b317977264e62341de8727900dda270f4a85d"} err="failed to get container status \"b11f38239ed1294dd112b2ffb73b317977264e62341de8727900dda270f4a85d\": rpc error: code = NotFound desc = could not find container \"b11f38239ed1294dd112b2ffb73b317977264e62341de8727900dda270f4a85d\": container with ID starting with b11f38239ed1294dd112b2ffb73b317977264e62341de8727900dda270f4a85d not found: ID does not exist"
	Dec 12 23:03:02 addons-577685 kubelet[1261]: I1212 23:03:02.077037    1261 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f063611-cb76-44ba-8497-e83a8e6c7f74-webhook-cert\") pod \"2f063611-cb76-44ba-8497-e83a8e6c7f74\" (UID: \"2f063611-cb76-44ba-8497-e83a8e6c7f74\") "
	Dec 12 23:03:02 addons-577685 kubelet[1261]: I1212 23:03:02.077099    1261 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4897f\" (UniqueName: \"kubernetes.io/projected/2f063611-cb76-44ba-8497-e83a8e6c7f74-kube-api-access-4897f\") pod \"2f063611-cb76-44ba-8497-e83a8e6c7f74\" (UID: \"2f063611-cb76-44ba-8497-e83a8e6c7f74\") "
	Dec 12 23:03:02 addons-577685 kubelet[1261]: I1212 23:03:02.081209    1261 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f063611-cb76-44ba-8497-e83a8e6c7f74-kube-api-access-4897f" (OuterVolumeSpecName: "kube-api-access-4897f") pod "2f063611-cb76-44ba-8497-e83a8e6c7f74" (UID: "2f063611-cb76-44ba-8497-e83a8e6c7f74"). InnerVolumeSpecName "kube-api-access-4897f". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 12 23:03:02 addons-577685 kubelet[1261]: I1212 23:03:02.095209    1261 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f063611-cb76-44ba-8497-e83a8e6c7f74-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "2f063611-cb76-44ba-8497-e83a8e6c7f74" (UID: "2f063611-cb76-44ba-8497-e83a8e6c7f74"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 23:03:02 addons-577685 kubelet[1261]: I1212 23:03:02.177568    1261 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f063611-cb76-44ba-8497-e83a8e6c7f74-webhook-cert\") on node \"addons-577685\" DevicePath \"\""
	Dec 12 23:03:02 addons-577685 kubelet[1261]: I1212 23:03:02.177634    1261 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4897f\" (UniqueName: \"kubernetes.io/projected/2f063611-cb76-44ba-8497-e83a8e6c7f74-kube-api-access-4897f\") on node \"addons-577685\" DevicePath \"\""
	Dec 12 23:03:02 addons-577685 kubelet[1261]: I1212 23:03:02.434551    1261 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2f063611-cb76-44ba-8497-e83a8e6c7f74" path="/var/lib/kubelet/pods/2f063611-cb76-44ba-8497-e83a8e6c7f74/volumes"
	
	* 
	* ==> storage-provisioner [49231e9bcbd9ddc260b2ba1a655f791f29fa580084246c05da35233bd5198ed0] <==
	* I1212 22:57:49.378976       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 22:57:49.450571       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 22:57:49.450747       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 22:57:49.462809       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 22:57:49.464864       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-577685_07066cfa-66be-4c7c-9add-7c5653d044ae!
	I1212 22:57:49.475329       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1223c027-a85c-4fc5-9d47-c6c6208db7aa", APIVersion:"v1", ResourceVersion:"820", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-577685_07066cfa-66be-4c7c-9add-7c5653d044ae became leader
	I1212 22:57:49.566475       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-577685_07066cfa-66be-4c7c-9add-7c5653d044ae!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-577685 -n addons-577685
helpers_test.go:261: (dbg) Run:  kubectl --context addons-577685 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (158.65s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.89s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-577685
addons_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-577685: exit status 82 (2m1.090351989s)

                                                
                                                
-- stdout --
	* Stopping node "addons-577685"  ...
	* Stopping node "addons-577685"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:173: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-577685" : exit status 82
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-577685
addons_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-577685: exit status 11 (21.513954682s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.136:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:177: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-577685" : exit status 11
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-577685
addons_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-577685: exit status 11 (6.144192756s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.136:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:181: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-577685" : exit status 11
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-577685
addons_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-577685: exit status 11 (6.143885174s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.136:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:186: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-577685" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.89s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (170.88s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-401709 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E1212 23:12:55.647951  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-401709 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.848471523s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-401709 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-401709 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [84cf89ad-bc4e-4b24-a63d-6e00b0c634f7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [84cf89ad-bc4e-4b24-a63d-6e00b0c634f7] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.015321635s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-401709 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1212 23:14:27.616659  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1212 23:14:27.621920  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1212 23:14:27.632171  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1212 23:14:27.652464  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1212 23:14:27.692787  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1212 23:14:27.773177  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1212 23:14:27.933636  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1212 23:14:28.254244  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1212 23:14:28.895191  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1212 23:14:30.175709  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1212 23:14:32.736641  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1212 23:14:37.857468  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1212 23:14:48.097990  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1212 23:15:08.578652  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1212 23:15:11.805081  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-401709 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.239660442s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-401709 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-401709 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.68
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-401709 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-401709 addons disable ingress-dns --alsologtostderr -v=1: (2.747913116s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-401709 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-401709 addons disable ingress --alsologtostderr -v=1: (7.92102178s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-401709 -n ingress-addon-legacy-401709
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-401709 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-401709 logs -n 25: (1.312568564s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                    |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image          | functional-579382 image ls                                                | functional-579382           | jenkins | v1.32.0 | 12 Dec 23 23:10 UTC | 12 Dec 23 23:10 UTC |
	| image          | functional-579382 image save                                              | functional-579382           | jenkins | v1.32.0 | 12 Dec 23 23:10 UTC | 12 Dec 23 23:10 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-579382                  |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-579382 image rm                                                | functional-579382           | jenkins | v1.32.0 | 12 Dec 23 23:10 UTC | 12 Dec 23 23:10 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-579382                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-579382 image ls                                                | functional-579382           | jenkins | v1.32.0 | 12 Dec 23 23:10 UTC | 12 Dec 23 23:10 UTC |
	| image          | functional-579382 image load                                              | functional-579382           | jenkins | v1.32.0 | 12 Dec 23 23:10 UTC | 12 Dec 23 23:10 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-579382 image ls                                                | functional-579382           | jenkins | v1.32.0 | 12 Dec 23 23:10 UTC | 12 Dec 23 23:10 UTC |
	| image          | functional-579382 image save --daemon                                     | functional-579382           | jenkins | v1.32.0 | 12 Dec 23 23:10 UTC | 12 Dec 23 23:10 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-579382                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| update-context | functional-579382                                                         | functional-579382           | jenkins | v1.32.0 | 12 Dec 23 23:10 UTC | 12 Dec 23 23:10 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| update-context | functional-579382                                                         | functional-579382           | jenkins | v1.32.0 | 12 Dec 23 23:10 UTC | 12 Dec 23 23:10 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| update-context | functional-579382                                                         | functional-579382           | jenkins | v1.32.0 | 12 Dec 23 23:10 UTC | 12 Dec 23 23:10 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| image          | functional-579382                                                         | functional-579382           | jenkins | v1.32.0 | 12 Dec 23 23:10 UTC | 12 Dec 23 23:10 UTC |
	|                | image ls --format short                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-579382                                                         | functional-579382           | jenkins | v1.32.0 | 12 Dec 23 23:10 UTC | 12 Dec 23 23:10 UTC |
	|                | image ls --format yaml                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| ssh            | functional-579382 ssh pgrep                                               | functional-579382           | jenkins | v1.32.0 | 12 Dec 23 23:10 UTC |                     |
	|                | buildkitd                                                                 |                             |         |         |                     |                     |
	| image          | functional-579382 image build -t                                          | functional-579382           | jenkins | v1.32.0 | 12 Dec 23 23:10 UTC | 12 Dec 23 23:10 UTC |
	|                | localhost/my-image:functional-579382                                      |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                          |                             |         |         |                     |                     |
	| image          | functional-579382                                                         | functional-579382           | jenkins | v1.32.0 | 12 Dec 23 23:10 UTC | 12 Dec 23 23:10 UTC |
	|                | image ls --format table                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-579382                                                         | functional-579382           | jenkins | v1.32.0 | 12 Dec 23 23:10 UTC | 12 Dec 23 23:10 UTC |
	|                | image ls --format json                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-579382 image ls                                                | functional-579382           | jenkins | v1.32.0 | 12 Dec 23 23:10 UTC | 12 Dec 23 23:10 UTC |
	| delete         | -p functional-579382                                                      | functional-579382           | jenkins | v1.32.0 | 12 Dec 23 23:10 UTC | 12 Dec 23 23:10 UTC |
	| start          | -p ingress-addon-legacy-401709                                            | ingress-addon-legacy-401709 | jenkins | v1.32.0 | 12 Dec 23 23:10 UTC | 12 Dec 23 23:12 UTC |
	|                | --kubernetes-version=v1.18.20                                             |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                        |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                                  |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-401709                                               | ingress-addon-legacy-401709 | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC | 12 Dec 23 23:12 UTC |
	|                | addons enable ingress                                                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-401709                                               | ingress-addon-legacy-401709 | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC | 12 Dec 23 23:12 UTC |
	|                | addons enable ingress-dns                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-401709                                               | ingress-addon-legacy-401709 | jenkins | v1.32.0 | 12 Dec 23 23:13 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                             |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                              |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-401709 ip                                            | ingress-addon-legacy-401709 | jenkins | v1.32.0 | 12 Dec 23 23:15 UTC | 12 Dec 23 23:15 UTC |
	| addons         | ingress-addon-legacy-401709                                               | ingress-addon-legacy-401709 | jenkins | v1.32.0 | 12 Dec 23 23:15 UTC | 12 Dec 23 23:15 UTC |
	|                | addons disable ingress-dns                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-401709                                               | ingress-addon-legacy-401709 | jenkins | v1.32.0 | 12 Dec 23 23:15 UTC | 12 Dec 23 23:15 UTC |
	|                | addons disable ingress                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 23:10:27
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 23:10:27.856941  152254 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:10:27.857090  152254 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:10:27.857100  152254 out.go:309] Setting ErrFile to fd 2...
	I1212 23:10:27.857107  152254 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:10:27.857300  152254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1212 23:10:27.857887  152254 out.go:303] Setting JSON to false
	I1212 23:10:27.858756  152254 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6776,"bootTime":1702415852,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 23:10:27.858822  152254 start.go:138] virtualization: kvm guest
	I1212 23:10:27.861098  152254 out.go:177] * [ingress-addon-legacy-401709] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 23:10:27.862557  152254 out.go:177]   - MINIKUBE_LOCATION=17777
	I1212 23:10:27.863976  152254 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:10:27.862588  152254 notify.go:220] Checking for updates...
	I1212 23:10:27.867092  152254 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:10:27.868873  152254 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 23:10:27.870433  152254 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 23:10:27.871874  152254 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:10:27.873382  152254 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:10:27.908484  152254 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 23:10:27.909899  152254 start.go:298] selected driver: kvm2
	I1212 23:10:27.909916  152254 start.go:902] validating driver "kvm2" against <nil>
	I1212 23:10:27.909924  152254 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:10:27.910665  152254 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:10:27.910729  152254 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17777-136241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 23:10:27.924601  152254 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 23:10:27.924669  152254 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 23:10:27.924866  152254 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 23:10:27.924932  152254 cni.go:84] Creating CNI manager for ""
	I1212 23:10:27.924944  152254 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:10:27.924955  152254 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 23:10:27.924964  152254 start_flags.go:323] config:
	{Name:ingress-addon-legacy-401709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-401709 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:10:27.925077  152254 iso.go:125] acquiring lock: {Name:mkaf0c5717f5bf6253bd7ebf86eb20a82d195bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:10:27.926868  152254 out.go:177] * Starting control plane node ingress-addon-legacy-401709 in cluster ingress-addon-legacy-401709
	I1212 23:10:27.928198  152254 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1212 23:10:28.431335  152254 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1212 23:10:28.431366  152254 cache.go:56] Caching tarball of preloaded images
	I1212 23:10:28.431573  152254 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1212 23:10:28.433338  152254 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1212 23:10:28.434920  152254 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1212 23:10:28.985597  152254 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1212 23:10:42.651268  152254 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1212 23:10:42.651366  152254 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1212 23:10:43.627946  152254 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1212 23:10:43.628328  152254 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/config.json ...
	I1212 23:10:43.628367  152254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/config.json: {Name:mk8caa2d9b957ad185a7689df884424a514f0ec4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:10:43.628584  152254 start.go:365] acquiring machines lock for ingress-addon-legacy-401709: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:10:43.628628  152254 start.go:369] acquired machines lock for "ingress-addon-legacy-401709" in 22.049µs
	I1212 23:10:43.628653  152254 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-401709 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-401709 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:10:43.628767  152254 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 23:10:43.631217  152254 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1212 23:10:43.631378  152254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:10:43.631435  152254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:10:43.645682  152254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41217
	I1212 23:10:43.646151  152254 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:10:43.646690  152254 main.go:141] libmachine: Using API Version  1
	I1212 23:10:43.646717  152254 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:10:43.647127  152254 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:10:43.647312  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetMachineName
	I1212 23:10:43.647438  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .DriverName
	I1212 23:10:43.647587  152254 start.go:159] libmachine.API.Create for "ingress-addon-legacy-401709" (driver="kvm2")
	I1212 23:10:43.647620  152254 client.go:168] LocalClient.Create starting
	I1212 23:10:43.647660  152254 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem
	I1212 23:10:43.647698  152254 main.go:141] libmachine: Decoding PEM data...
	I1212 23:10:43.647721  152254 main.go:141] libmachine: Parsing certificate...
	I1212 23:10:43.647782  152254 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem
	I1212 23:10:43.647810  152254 main.go:141] libmachine: Decoding PEM data...
	I1212 23:10:43.647834  152254 main.go:141] libmachine: Parsing certificate...
	I1212 23:10:43.647859  152254 main.go:141] libmachine: Running pre-create checks...
	I1212 23:10:43.647871  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .PreCreateCheck
	I1212 23:10:43.648248  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetConfigRaw
	I1212 23:10:43.648663  152254 main.go:141] libmachine: Creating machine...
	I1212 23:10:43.648680  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .Create
	I1212 23:10:43.648862  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Creating KVM machine...
	I1212 23:10:43.650284  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | found existing default KVM network
	I1212 23:10:43.650965  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | I1212 23:10:43.650786  152309 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a10}
	I1212 23:10:43.656559  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | trying to create private KVM network mk-ingress-addon-legacy-401709 192.168.39.0/24...
	I1212 23:10:43.726295  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | private KVM network mk-ingress-addon-legacy-401709 192.168.39.0/24 created
	I1212 23:10:43.726334  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | I1212 23:10:43.726236  152309 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 23:10:43.726348  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Setting up store path in /home/jenkins/minikube-integration/17777-136241/.minikube/machines/ingress-addon-legacy-401709 ...
	I1212 23:10:43.726374  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Building disk image from file:///home/jenkins/minikube-integration/17777-136241/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso
	I1212 23:10:43.726397  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Downloading /home/jenkins/minikube-integration/17777-136241/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17777-136241/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso...
	I1212 23:10:43.963807  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | I1212 23:10:43.963677  152309 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/ingress-addon-legacy-401709/id_rsa...
	I1212 23:10:44.016408  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | I1212 23:10:44.016235  152309 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/ingress-addon-legacy-401709/ingress-addon-legacy-401709.rawdisk...
	I1212 23:10:44.016476  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | Writing magic tar header
	I1212 23:10:44.016500  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | Writing SSH key tar header
	I1212 23:10:44.016517  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | I1212 23:10:44.016392  152309 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17777-136241/.minikube/machines/ingress-addon-legacy-401709 ...
	I1212 23:10:44.016541  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/ingress-addon-legacy-401709
	I1212 23:10:44.016601  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Setting executable bit set on /home/jenkins/minikube-integration/17777-136241/.minikube/machines/ingress-addon-legacy-401709 (perms=drwx------)
	I1212 23:10:44.016623  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Setting executable bit set on /home/jenkins/minikube-integration/17777-136241/.minikube/machines (perms=drwxr-xr-x)
	I1212 23:10:44.016631  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17777-136241/.minikube/machines
	I1212 23:10:44.016643  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 23:10:44.016651  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17777-136241
	I1212 23:10:44.016664  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 23:10:44.016673  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | Checking permissions on dir: /home/jenkins
	I1212 23:10:44.016680  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | Checking permissions on dir: /home
	I1212 23:10:44.016688  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | Skipping /home - not owner
	I1212 23:10:44.016726  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Setting executable bit set on /home/jenkins/minikube-integration/17777-136241/.minikube (perms=drwxr-xr-x)
	I1212 23:10:44.016757  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Setting executable bit set on /home/jenkins/minikube-integration/17777-136241 (perms=drwxrwxr-x)
	I1212 23:10:44.016778  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 23:10:44.016793  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 23:10:44.016811  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Creating domain...
	I1212 23:10:44.017691  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) define libvirt domain using xml: 
	I1212 23:10:44.017709  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) <domain type='kvm'>
	I1212 23:10:44.017729  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)   <name>ingress-addon-legacy-401709</name>
	I1212 23:10:44.017739  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)   <memory unit='MiB'>4096</memory>
	I1212 23:10:44.017752  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)   <vcpu>2</vcpu>
	I1212 23:10:44.017761  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)   <features>
	I1212 23:10:44.017775  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     <acpi/>
	I1212 23:10:44.017787  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     <apic/>
	I1212 23:10:44.017824  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     <pae/>
	I1212 23:10:44.017863  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     
	I1212 23:10:44.017879  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)   </features>
	I1212 23:10:44.017895  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)   <cpu mode='host-passthrough'>
	I1212 23:10:44.017928  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)   
	I1212 23:10:44.017953  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)   </cpu>
	I1212 23:10:44.017969  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)   <os>
	I1212 23:10:44.017987  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     <type>hvm</type>
	I1212 23:10:44.018003  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     <boot dev='cdrom'/>
	I1212 23:10:44.018016  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     <boot dev='hd'/>
	I1212 23:10:44.018031  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     <bootmenu enable='no'/>
	I1212 23:10:44.018047  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)   </os>
	I1212 23:10:44.018059  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)   <devices>
	I1212 23:10:44.018074  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     <disk type='file' device='cdrom'>
	I1212 23:10:44.018100  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)       <source file='/home/jenkins/minikube-integration/17777-136241/.minikube/machines/ingress-addon-legacy-401709/boot2docker.iso'/>
	I1212 23:10:44.018114  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)       <target dev='hdc' bus='scsi'/>
	I1212 23:10:44.018138  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)       <readonly/>
	I1212 23:10:44.018160  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     </disk>
	I1212 23:10:44.018186  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     <disk type='file' device='disk'>
	I1212 23:10:44.018209  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 23:10:44.018234  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)       <source file='/home/jenkins/minikube-integration/17777-136241/.minikube/machines/ingress-addon-legacy-401709/ingress-addon-legacy-401709.rawdisk'/>
	I1212 23:10:44.018247  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)       <target dev='hda' bus='virtio'/>
	I1212 23:10:44.018262  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     </disk>
	I1212 23:10:44.018273  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     <interface type='network'>
	I1212 23:10:44.018295  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)       <source network='mk-ingress-addon-legacy-401709'/>
	I1212 23:10:44.018311  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)       <model type='virtio'/>
	I1212 23:10:44.018326  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     </interface>
	I1212 23:10:44.018339  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     <interface type='network'>
	I1212 23:10:44.018354  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)       <source network='default'/>
	I1212 23:10:44.018366  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)       <model type='virtio'/>
	I1212 23:10:44.018386  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     </interface>
	I1212 23:10:44.018413  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     <serial type='pty'>
	I1212 23:10:44.018423  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)       <target port='0'/>
	I1212 23:10:44.018433  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     </serial>
	I1212 23:10:44.018448  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     <console type='pty'>
	I1212 23:10:44.018462  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)       <target type='serial' port='0'/>
	I1212 23:10:44.018475  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     </console>
	I1212 23:10:44.018487  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     <rng model='virtio'>
	I1212 23:10:44.018507  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)       <backend model='random'>/dev/random</backend>
	I1212 23:10:44.018520  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     </rng>
	I1212 23:10:44.018540  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     
	I1212 23:10:44.018554  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)     
	I1212 23:10:44.018566  152254 main.go:141] libmachine: (ingress-addon-legacy-401709)   </devices>
	I1212 23:10:44.018582  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) </domain>
	I1212 23:10:44.018595  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) 
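For readability, the libvirt domain definition logged line-by-line above is reproduced here with the log prefixes and blank continuation lines stripped; this is a verbatim reconstruction of the XML in the preceding lines, nothing added or changed:

    <domain type='kvm'>
      <name>ingress-addon-legacy-401709</name>
      <memory unit='MiB'>4096</memory>
      <vcpu>2</vcpu>
      <features>
        <acpi/>
        <apic/>
        <pae/>
      </features>
      <cpu mode='host-passthrough'>
      </cpu>
      <os>
        <type>hvm</type>
        <boot dev='cdrom'/>
        <boot dev='hd'/>
        <bootmenu enable='no'/>
      </os>
      <devices>
        <disk type='file' device='cdrom'>
          <source file='/home/jenkins/minikube-integration/17777-136241/.minikube/machines/ingress-addon-legacy-401709/boot2docker.iso'/>
          <target dev='hdc' bus='scsi'/>
          <readonly/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='default' io='threads' />
          <source file='/home/jenkins/minikube-integration/17777-136241/.minikube/machines/ingress-addon-legacy-401709/ingress-addon-legacy-401709.rawdisk'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='mk-ingress-addon-legacy-401709'/>
          <model type='virtio'/>
        </interface>
        <interface type='network'>
          <source network='default'/>
          <model type='virtio'/>
        </interface>
        <serial type='pty'>
          <target port='0'/>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <rng model='virtio'>
          <backend model='random'>/dev/random</backend>
        </rng>
      </devices>
    </domain>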
	I1212 23:10:44.022949  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:8f:93:bc in network default
	I1212 23:10:44.023591  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Ensuring networks are active...
	I1212 23:10:44.023616  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:10:44.024284  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Ensuring network default is active
	I1212 23:10:44.024620  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Ensuring network mk-ingress-addon-legacy-401709 is active
	I1212 23:10:44.025214  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Getting domain xml...
	I1212 23:10:44.025899  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Creating domain...
	I1212 23:10:45.261784  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Waiting to get IP...
	I1212 23:10:45.262528  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:10:45.262995  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | unable to find current IP address of domain ingress-addon-legacy-401709 in network mk-ingress-addon-legacy-401709
	I1212 23:10:45.263036  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | I1212 23:10:45.262983  152309 retry.go:31] will retry after 225.988526ms: waiting for machine to come up
	I1212 23:10:45.490430  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:10:45.490864  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | unable to find current IP address of domain ingress-addon-legacy-401709 in network mk-ingress-addon-legacy-401709
	I1212 23:10:45.490892  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | I1212 23:10:45.490828  152309 retry.go:31] will retry after 343.658187ms: waiting for machine to come up
	I1212 23:10:45.836470  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:10:45.836879  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | unable to find current IP address of domain ingress-addon-legacy-401709 in network mk-ingress-addon-legacy-401709
	I1212 23:10:45.836921  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | I1212 23:10:45.836822  152309 retry.go:31] will retry after 410.899462ms: waiting for machine to come up
	I1212 23:10:46.249543  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:10:46.249987  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | unable to find current IP address of domain ingress-addon-legacy-401709 in network mk-ingress-addon-legacy-401709
	I1212 23:10:46.250020  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | I1212 23:10:46.249942  152309 retry.go:31] will retry after 475.069341ms: waiting for machine to come up
	I1212 23:10:46.726543  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:10:46.726917  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | unable to find current IP address of domain ingress-addon-legacy-401709 in network mk-ingress-addon-legacy-401709
	I1212 23:10:46.726942  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | I1212 23:10:46.726877  152309 retry.go:31] will retry after 597.823518ms: waiting for machine to come up
	I1212 23:10:47.326665  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:10:47.327198  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | unable to find current IP address of domain ingress-addon-legacy-401709 in network mk-ingress-addon-legacy-401709
	I1212 23:10:47.327270  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | I1212 23:10:47.327120  152309 retry.go:31] will retry after 918.571058ms: waiting for machine to come up
	I1212 23:10:48.247327  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:10:48.247737  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | unable to find current IP address of domain ingress-addon-legacy-401709 in network mk-ingress-addon-legacy-401709
	I1212 23:10:48.247768  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | I1212 23:10:48.247677  152309 retry.go:31] will retry after 751.202583ms: waiting for machine to come up
	I1212 23:10:49.000943  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:10:49.001501  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | unable to find current IP address of domain ingress-addon-legacy-401709 in network mk-ingress-addon-legacy-401709
	I1212 23:10:49.001530  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | I1212 23:10:49.001444  152309 retry.go:31] will retry after 1.276932507s: waiting for machine to come up
	I1212 23:10:50.279865  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:10:50.280204  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | unable to find current IP address of domain ingress-addon-legacy-401709 in network mk-ingress-addon-legacy-401709
	I1212 23:10:50.280242  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | I1212 23:10:50.280138  152309 retry.go:31] will retry after 1.349834777s: waiting for machine to come up
	I1212 23:10:51.631586  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:10:51.632056  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | unable to find current IP address of domain ingress-addon-legacy-401709 in network mk-ingress-addon-legacy-401709
	I1212 23:10:51.632084  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | I1212 23:10:51.631999  152309 retry.go:31] will retry after 2.237628667s: waiting for machine to come up
	I1212 23:10:53.871395  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:10:53.871788  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | unable to find current IP address of domain ingress-addon-legacy-401709 in network mk-ingress-addon-legacy-401709
	I1212 23:10:53.871819  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | I1212 23:10:53.871729  152309 retry.go:31] will retry after 2.82278934s: waiting for machine to come up
	I1212 23:10:56.697655  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:10:56.698088  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | unable to find current IP address of domain ingress-addon-legacy-401709 in network mk-ingress-addon-legacy-401709
	I1212 23:10:56.698109  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | I1212 23:10:56.698053  152309 retry.go:31] will retry after 3.474559592s: waiting for machine to come up
	I1212 23:11:00.174500  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:00.174828  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | unable to find current IP address of domain ingress-addon-legacy-401709 in network mk-ingress-addon-legacy-401709
	I1212 23:11:00.174873  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | I1212 23:11:00.174773  152309 retry.go:31] will retry after 3.037804816s: waiting for machine to come up
	I1212 23:11:03.215994  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:03.216380  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | unable to find current IP address of domain ingress-addon-legacy-401709 in network mk-ingress-addon-legacy-401709
	I1212 23:11:03.216404  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | I1212 23:11:03.216338  152309 retry.go:31] will retry after 4.204947177s: waiting for machine to come up
	I1212 23:11:07.425773  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:07.426211  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Found IP for machine: 192.168.39.68
	I1212 23:11:07.426236  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has current primary IP address 192.168.39.68 and MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:07.426244  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Reserving static IP address...
	I1212 23:11:07.426567  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-401709", mac: "52:54:00:e8:19:9d", ip: "192.168.39.68"} in network mk-ingress-addon-legacy-401709
	I1212 23:11:07.497510  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | Getting to WaitForSSH function...
	I1212 23:11:07.497582  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Reserved static IP address: 192.168.39.68
	I1212 23:11:07.497600  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Waiting for SSH to be available...
	I1212 23:11:07.500515  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:07.500977  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:19:9d", ip: ""} in network mk-ingress-addon-legacy-401709: {Iface:virbr1 ExpiryTime:2023-12-13 00:10:59 +0000 UTC Type:0 Mac:52:54:00:e8:19:9d Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e8:19:9d}
	I1212 23:11:07.501006  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined IP address 192.168.39.68 and MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:07.501157  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | Using SSH client type: external
	I1212 23:11:07.501185  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/ingress-addon-legacy-401709/id_rsa (-rw-------)
	I1212 23:11:07.501225  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.68 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/ingress-addon-legacy-401709/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:11:07.501241  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | About to run SSH command:
	I1212 23:11:07.501260  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | exit 0
	I1212 23:11:07.592099  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | SSH cmd err, output: <nil>: 
	I1212 23:11:07.592329  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) KVM machine creation complete!
	I1212 23:11:07.592675  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetConfigRaw
	I1212 23:11:07.593216  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .DriverName
	I1212 23:11:07.593387  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .DriverName
	I1212 23:11:07.593538  152254 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 23:11:07.593553  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetState
	I1212 23:11:07.594755  152254 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 23:11:07.594774  152254 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 23:11:07.594784  152254 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 23:11:07.594798  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHHostname
	I1212 23:11:07.596817  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:07.597126  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:19:9d", ip: ""} in network mk-ingress-addon-legacy-401709: {Iface:virbr1 ExpiryTime:2023-12-13 00:10:59 +0000 UTC Type:0 Mac:52:54:00:e8:19:9d Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ingress-addon-legacy-401709 Clientid:01:52:54:00:e8:19:9d}
	I1212 23:11:07.597155  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined IP address 192.168.39.68 and MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:07.597260  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHPort
	I1212 23:11:07.597448  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHKeyPath
	I1212 23:11:07.597633  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHKeyPath
	I1212 23:11:07.597758  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHUsername
	I1212 23:11:07.597911  152254 main.go:141] libmachine: Using SSH client type: native
	I1212 23:11:07.598309  152254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I1212 23:11:07.598323  152254 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 23:11:07.715610  152254 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:11:07.715640  152254 main.go:141] libmachine: Detecting the provisioner...
	I1212 23:11:07.715651  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHHostname
	I1212 23:11:07.718644  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:07.718991  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:19:9d", ip: ""} in network mk-ingress-addon-legacy-401709: {Iface:virbr1 ExpiryTime:2023-12-13 00:10:59 +0000 UTC Type:0 Mac:52:54:00:e8:19:9d Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ingress-addon-legacy-401709 Clientid:01:52:54:00:e8:19:9d}
	I1212 23:11:07.719032  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined IP address 192.168.39.68 and MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:07.719176  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHPort
	I1212 23:11:07.719405  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHKeyPath
	I1212 23:11:07.719555  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHKeyPath
	I1212 23:11:07.719703  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHUsername
	I1212 23:11:07.719835  152254 main.go:141] libmachine: Using SSH client type: native
	I1212 23:11:07.720282  152254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I1212 23:11:07.720298  152254 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 23:11:07.841087  152254 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0ec83c8-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 23:11:07.841158  152254 main.go:141] libmachine: found compatible host: buildroot
	I1212 23:11:07.841170  152254 main.go:141] libmachine: Provisioning with buildroot...
	I1212 23:11:07.841184  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetMachineName
	I1212 23:11:07.841404  152254 buildroot.go:166] provisioning hostname "ingress-addon-legacy-401709"
	I1212 23:11:07.841430  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetMachineName
	I1212 23:11:07.841616  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHHostname
	I1212 23:11:07.844091  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:07.844383  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:19:9d", ip: ""} in network mk-ingress-addon-legacy-401709: {Iface:virbr1 ExpiryTime:2023-12-13 00:10:59 +0000 UTC Type:0 Mac:52:54:00:e8:19:9d Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ingress-addon-legacy-401709 Clientid:01:52:54:00:e8:19:9d}
	I1212 23:11:07.844423  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined IP address 192.168.39.68 and MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:07.844545  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHPort
	I1212 23:11:07.844757  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHKeyPath
	I1212 23:11:07.844901  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHKeyPath
	I1212 23:11:07.845047  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHUsername
	I1212 23:11:07.845218  152254 main.go:141] libmachine: Using SSH client type: native
	I1212 23:11:07.845527  152254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I1212 23:11:07.845540  152254 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-401709 && echo "ingress-addon-legacy-401709" | sudo tee /etc/hostname
	I1212 23:11:07.977107  152254 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-401709
	
	I1212 23:11:07.977154  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHHostname
	I1212 23:11:07.979655  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:07.980044  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:19:9d", ip: ""} in network mk-ingress-addon-legacy-401709: {Iface:virbr1 ExpiryTime:2023-12-13 00:10:59 +0000 UTC Type:0 Mac:52:54:00:e8:19:9d Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ingress-addon-legacy-401709 Clientid:01:52:54:00:e8:19:9d}
	I1212 23:11:07.980066  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined IP address 192.168.39.68 and MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:07.980197  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHPort
	I1212 23:11:07.980416  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHKeyPath
	I1212 23:11:07.980614  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHKeyPath
	I1212 23:11:07.980761  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHUsername
	I1212 23:11:07.980918  152254 main.go:141] libmachine: Using SSH client type: native
	I1212 23:11:07.981257  152254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I1212 23:11:07.981282  152254 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-401709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-401709/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-401709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:11:08.113294  152254 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:11:08.113319  152254 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1212 23:11:08.113334  152254 buildroot.go:174] setting up certificates
	I1212 23:11:08.113345  152254 provision.go:83] configureAuth start
	I1212 23:11:08.113356  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetMachineName
	I1212 23:11:08.113638  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetIP
	I1212 23:11:08.116278  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.116655  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:19:9d", ip: ""} in network mk-ingress-addon-legacy-401709: {Iface:virbr1 ExpiryTime:2023-12-13 00:10:59 +0000 UTC Type:0 Mac:52:54:00:e8:19:9d Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ingress-addon-legacy-401709 Clientid:01:52:54:00:e8:19:9d}
	I1212 23:11:08.116679  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined IP address 192.168.39.68 and MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.116803  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHHostname
	I1212 23:11:08.119012  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.119297  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:19:9d", ip: ""} in network mk-ingress-addon-legacy-401709: {Iface:virbr1 ExpiryTime:2023-12-13 00:10:59 +0000 UTC Type:0 Mac:52:54:00:e8:19:9d Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ingress-addon-legacy-401709 Clientid:01:52:54:00:e8:19:9d}
	I1212 23:11:08.119334  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined IP address 192.168.39.68 and MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.119489  152254 provision.go:138] copyHostCerts
	I1212 23:11:08.119518  152254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1212 23:11:08.119552  152254 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1212 23:11:08.119560  152254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1212 23:11:08.119625  152254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1212 23:11:08.119712  152254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1212 23:11:08.119729  152254 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1212 23:11:08.119735  152254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1212 23:11:08.119758  152254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1212 23:11:08.119816  152254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1212 23:11:08.119832  152254 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1212 23:11:08.119838  152254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1212 23:11:08.119858  152254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1212 23:11:08.119926  152254 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-401709 san=[192.168.39.68 192.168.39.68 localhost 127.0.0.1 minikube ingress-addon-legacy-401709]
	I1212 23:11:08.225750  152254 provision.go:172] copyRemoteCerts
	I1212 23:11:08.225807  152254 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:11:08.225829  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHHostname
	I1212 23:11:08.228563  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.228913  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:19:9d", ip: ""} in network mk-ingress-addon-legacy-401709: {Iface:virbr1 ExpiryTime:2023-12-13 00:10:59 +0000 UTC Type:0 Mac:52:54:00:e8:19:9d Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ingress-addon-legacy-401709 Clientid:01:52:54:00:e8:19:9d}
	I1212 23:11:08.228939  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined IP address 192.168.39.68 and MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.229159  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHPort
	I1212 23:11:08.229331  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHKeyPath
	I1212 23:11:08.229508  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHUsername
	I1212 23:11:08.229649  152254 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/ingress-addon-legacy-401709/id_rsa Username:docker}
	I1212 23:11:08.318044  152254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 23:11:08.318112  152254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:11:08.344239  152254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 23:11:08.344304  152254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1212 23:11:08.370209  152254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 23:11:08.370281  152254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:11:08.396339  152254 provision.go:86] duration metric: configureAuth took 282.980461ms
	I1212 23:11:08.396362  152254 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:11:08.396551  152254 config.go:182] Loaded profile config "ingress-addon-legacy-401709": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1212 23:11:08.396627  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHHostname
	I1212 23:11:08.399338  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.399715  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:19:9d", ip: ""} in network mk-ingress-addon-legacy-401709: {Iface:virbr1 ExpiryTime:2023-12-13 00:10:59 +0000 UTC Type:0 Mac:52:54:00:e8:19:9d Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ingress-addon-legacy-401709 Clientid:01:52:54:00:e8:19:9d}
	I1212 23:11:08.399754  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined IP address 192.168.39.68 and MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.399894  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHPort
	I1212 23:11:08.400090  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHKeyPath
	I1212 23:11:08.400242  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHKeyPath
	I1212 23:11:08.400380  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHUsername
	I1212 23:11:08.400544  152254 main.go:141] libmachine: Using SSH client type: native
	I1212 23:11:08.401019  152254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I1212 23:11:08.401043  152254 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:11:08.704730  152254 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:11:08.704764  152254 main.go:141] libmachine: Checking connection to Docker...
	I1212 23:11:08.704777  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetURL
	I1212 23:11:08.706088  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | Using libvirt version 6000000
	I1212 23:11:08.708386  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.708739  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:19:9d", ip: ""} in network mk-ingress-addon-legacy-401709: {Iface:virbr1 ExpiryTime:2023-12-13 00:10:59 +0000 UTC Type:0 Mac:52:54:00:e8:19:9d Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ingress-addon-legacy-401709 Clientid:01:52:54:00:e8:19:9d}
	I1212 23:11:08.708773  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined IP address 192.168.39.68 and MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.708889  152254 main.go:141] libmachine: Docker is up and running!
	I1212 23:11:08.708906  152254 main.go:141] libmachine: Reticulating splines...
	I1212 23:11:08.708915  152254 client.go:171] LocalClient.Create took 25.061280973s
	I1212 23:11:08.708940  152254 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-401709" took 25.061354335s
	I1212 23:11:08.708950  152254 start.go:300] post-start starting for "ingress-addon-legacy-401709" (driver="kvm2")
	I1212 23:11:08.708961  152254 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:11:08.708982  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .DriverName
	I1212 23:11:08.709245  152254 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:11:08.709267  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHHostname
	I1212 23:11:08.711647  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.711991  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:19:9d", ip: ""} in network mk-ingress-addon-legacy-401709: {Iface:virbr1 ExpiryTime:2023-12-13 00:10:59 +0000 UTC Type:0 Mac:52:54:00:e8:19:9d Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ingress-addon-legacy-401709 Clientid:01:52:54:00:e8:19:9d}
	I1212 23:11:08.712019  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined IP address 192.168.39.68 and MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.712245  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHPort
	I1212 23:11:08.712461  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHKeyPath
	I1212 23:11:08.712621  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHUsername
	I1212 23:11:08.712790  152254 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/ingress-addon-legacy-401709/id_rsa Username:docker}
	I1212 23:11:08.802315  152254 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:11:08.806713  152254 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:11:08.806733  152254 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1212 23:11:08.806792  152254 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1212 23:11:08.806895  152254 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1212 23:11:08.806908  152254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> /etc/ssl/certs/1435412.pem
	I1212 23:11:08.806996  152254 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:11:08.815490  152254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1212 23:11:08.837974  152254 start.go:303] post-start completed in 129.00794ms
	I1212 23:11:08.838033  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetConfigRaw
	I1212 23:11:08.838556  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetIP
	I1212 23:11:08.841189  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.841603  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:19:9d", ip: ""} in network mk-ingress-addon-legacy-401709: {Iface:virbr1 ExpiryTime:2023-12-13 00:10:59 +0000 UTC Type:0 Mac:52:54:00:e8:19:9d Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ingress-addon-legacy-401709 Clientid:01:52:54:00:e8:19:9d}
	I1212 23:11:08.841630  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined IP address 192.168.39.68 and MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.841826  152254 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/config.json ...
	I1212 23:11:08.842000  152254 start.go:128] duration metric: createHost completed in 25.213222977s
	I1212 23:11:08.842021  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHHostname
	I1212 23:11:08.845475  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.845852  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:19:9d", ip: ""} in network mk-ingress-addon-legacy-401709: {Iface:virbr1 ExpiryTime:2023-12-13 00:10:59 +0000 UTC Type:0 Mac:52:54:00:e8:19:9d Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ingress-addon-legacy-401709 Clientid:01:52:54:00:e8:19:9d}
	I1212 23:11:08.845884  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined IP address 192.168.39.68 and MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.846020  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHPort
	I1212 23:11:08.846219  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHKeyPath
	I1212 23:11:08.846368  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHKeyPath
	I1212 23:11:08.846530  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHUsername
	I1212 23:11:08.846706  152254 main.go:141] libmachine: Using SSH client type: native
	I1212 23:11:08.847029  152254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I1212 23:11:08.847040  152254 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:11:08.965178  152254 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702422668.948287052
	
	I1212 23:11:08.965205  152254 fix.go:206] guest clock: 1702422668.948287052
	I1212 23:11:08.965215  152254 fix.go:219] Guest: 2023-12-12 23:11:08.948287052 +0000 UTC Remote: 2023-12-12 23:11:08.842010536 +0000 UTC m=+41.033776505 (delta=106.276516ms)
	I1212 23:11:08.965237  152254 fix.go:190] guest clock delta is within tolerance: 106.276516ms
	I1212 23:11:08.965244  152254 start.go:83] releasing machines lock for "ingress-addon-legacy-401709", held for 25.336604052s
	I1212 23:11:08.965271  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .DriverName
	I1212 23:11:08.965575  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetIP
	I1212 23:11:08.968033  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.968300  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:19:9d", ip: ""} in network mk-ingress-addon-legacy-401709: {Iface:virbr1 ExpiryTime:2023-12-13 00:10:59 +0000 UTC Type:0 Mac:52:54:00:e8:19:9d Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ingress-addon-legacy-401709 Clientid:01:52:54:00:e8:19:9d}
	I1212 23:11:08.968342  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined IP address 192.168.39.68 and MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.968465  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .DriverName
	I1212 23:11:08.968946  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .DriverName
	I1212 23:11:08.969109  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .DriverName
	I1212 23:11:08.969195  152254 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:11:08.969237  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHHostname
	I1212 23:11:08.969284  152254 ssh_runner.go:195] Run: cat /version.json
	I1212 23:11:08.969307  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHHostname
	I1212 23:11:08.971894  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.972195  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:19:9d", ip: ""} in network mk-ingress-addon-legacy-401709: {Iface:virbr1 ExpiryTime:2023-12-13 00:10:59 +0000 UTC Type:0 Mac:52:54:00:e8:19:9d Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ingress-addon-legacy-401709 Clientid:01:52:54:00:e8:19:9d}
	I1212 23:11:08.972226  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined IP address 192.168.39.68 and MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.972252  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.972370  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHPort
	I1212 23:11:08.972577  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHKeyPath
	I1212 23:11:08.972683  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:19:9d", ip: ""} in network mk-ingress-addon-legacy-401709: {Iface:virbr1 ExpiryTime:2023-12-13 00:10:59 +0000 UTC Type:0 Mac:52:54:00:e8:19:9d Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ingress-addon-legacy-401709 Clientid:01:52:54:00:e8:19:9d}
	I1212 23:11:08.972718  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined IP address 192.168.39.68 and MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:08.972758  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHUsername
	I1212 23:11:08.972858  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHPort
	I1212 23:11:08.972930  152254 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/ingress-addon-legacy-401709/id_rsa Username:docker}
	I1212 23:11:08.972995  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHKeyPath
	I1212 23:11:08.973156  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHUsername
	I1212 23:11:08.973270  152254 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/ingress-addon-legacy-401709/id_rsa Username:docker}
	I1212 23:11:09.057209  152254 ssh_runner.go:195] Run: systemctl --version
	I1212 23:11:09.086902  152254 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:11:09.240526  152254 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:11:09.246666  152254 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:11:09.246717  152254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:11:09.261808  152254 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:11:09.261833  152254 start.go:475] detecting cgroup driver to use...
	I1212 23:11:09.261899  152254 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:11:09.276237  152254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:11:09.289155  152254 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:11:09.289226  152254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:11:09.302555  152254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:11:09.316089  152254 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:11:09.417520  152254 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:11:09.530944  152254 docker.go:219] disabling docker service ...
	I1212 23:11:09.531014  152254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:11:09.544702  152254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:11:09.555613  152254 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:11:09.661569  152254 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:11:09.770379  152254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:11:09.782775  152254 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:11:09.800408  152254 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1212 23:11:09.800495  152254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:11:09.809758  152254 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:11:09.809832  152254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:11:09.818896  152254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:11:09.827730  152254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
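The three sed edits above pin the pause image to registry.k8s.io/pause:3.2, switch cri-o's cgroup manager to cgroupfs, and route conmon into the pod cgroup, all inside the 02-crio.conf drop-in. A quick hand-run way to confirm the drop-in ended up as intended (a sketch for reference; the grep is not part of the test run):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected, given the edits above:
    #   pause_image = "registry.k8s.io/pause:3.2"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"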
	I1212 23:11:09.836548  152254 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:11:09.846074  152254 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:11:09.854309  152254 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:11:09.854383  152254 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:11:09.866814  152254 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
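Because /proc/sys/net/bridge/bridge-nf-call-iptables did not exist, the harness loads the br_netfilter module and then enables IPv4 forwarding. A minimal manual check of the same prerequisites, assuming a shell on the guest (not part of the test run):

    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    # both sysctls should report 1 before kubeadm init runs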
	I1212 23:11:09.876503  152254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:11:09.984636  152254 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:11:10.148086  152254 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:11:10.148173  152254 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:11:10.153819  152254 start.go:543] Will wait 60s for crictl version
	I1212 23:11:10.153871  152254 ssh_runner.go:195] Run: which crictl
	I1212 23:11:10.161087  152254 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:11:10.200529  152254 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:11:10.200619  152254 ssh_runner.go:195] Run: crio --version
	I1212 23:11:10.248141  152254 ssh_runner.go:195] Run: crio --version
	I1212 23:11:10.297418  152254 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I1212 23:11:10.298811  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetIP
	I1212 23:11:10.301917  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:10.302380  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:19:9d", ip: ""} in network mk-ingress-addon-legacy-401709: {Iface:virbr1 ExpiryTime:2023-12-13 00:10:59 +0000 UTC Type:0 Mac:52:54:00:e8:19:9d Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ingress-addon-legacy-401709 Clientid:01:52:54:00:e8:19:9d}
	I1212 23:11:10.302421  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined IP address 192.168.39.68 and MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:10.302663  152254 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 23:11:10.306961  152254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
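The one-liner above rewrites /etc/hosts without sed -i: it filters out any stale host.minikube.internal entry, appends the current mapping, writes the result to a temp file, and copies it back into place with sudo. Spelled out step by step, using the same IP and hostname as above (a readability sketch, not an additional command the test ran):

    ENTRY=$'192.168.39.1\thost.minikube.internal'
    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$ENTRY"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts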
	I1212 23:11:10.318525  152254 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1212 23:11:10.318585  152254 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:11:10.352146  152254 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1212 23:11:10.352227  152254 ssh_runner.go:195] Run: which lz4
	I1212 23:11:10.356251  152254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1212 23:11:10.356358  152254 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 23:11:10.360408  152254 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:11:10.360449  152254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1212 23:11:12.391997  152254 crio.go:444] Took 2.035667 seconds to copy over tarball
	I1212 23:11:12.392066  152254 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:11:15.595383  152254 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.203288297s)
	I1212 23:11:15.595415  152254 crio.go:451] Took 3.203393 seconds to extract the tarball
	I1212 23:11:15.595423  152254 ssh_runner.go:146] rm: /preloaded.tar.lz4
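The preloaded image tarball is lz4-compressed, so tar is driven with an external decompressor via -I. Inspecting or re-extracting such an archive by hand follows the same pattern (sketch; the path is the one used above):

    # list the first few entries without extracting
    sudo tar -I lz4 -tf /preloaded.tar.lz4 | head
    # extract into /var, exactly as the harness does before deleting the tarball
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4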
	I1212 23:11:15.640631  152254 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:11:15.695406  152254 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1212 23:11:15.695437  152254 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 23:11:15.695507  152254 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:11:15.695548  152254 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 23:11:15.695556  152254 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 23:11:15.695609  152254 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1212 23:11:15.695530  152254 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 23:11:15.695656  152254 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1212 23:11:15.695697  152254 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1212 23:11:15.695700  152254 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 23:11:15.697939  152254 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 23:11:15.698335  152254 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 23:11:15.698467  152254 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1212 23:11:15.698564  152254 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 23:11:15.698623  152254 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1212 23:11:15.698655  152254 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:11:15.698696  152254 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1212 23:11:15.699269  152254 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 23:11:15.851181  152254 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1212 23:11:15.854199  152254 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1212 23:11:15.863364  152254 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1212 23:11:15.864885  152254 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1212 23:11:15.869415  152254 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 23:11:15.883193  152254 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1212 23:11:15.895910  152254 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1212 23:11:15.955932  152254 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1212 23:11:15.955981  152254 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 23:11:15.956032  152254 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1212 23:11:15.956068  152254 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 23:11:15.956109  152254 ssh_runner.go:195] Run: which crictl
	I1212 23:11:15.956041  152254 ssh_runner.go:195] Run: which crictl
	I1212 23:11:16.006581  152254 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1212 23:11:16.006624  152254 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1212 23:11:16.006674  152254 ssh_runner.go:195] Run: which crictl
	I1212 23:11:16.026494  152254 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1212 23:11:16.026544  152254 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 23:11:16.026608  152254 ssh_runner.go:195] Run: which crictl
	I1212 23:11:16.037599  152254 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1212 23:11:16.037633  152254 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1212 23:11:16.037651  152254 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 23:11:16.037666  152254 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1212 23:11:16.037702  152254 ssh_runner.go:195] Run: which crictl
	I1212 23:11:16.037713  152254 ssh_runner.go:195] Run: which crictl
	I1212 23:11:16.039028  152254 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1212 23:11:16.039056  152254 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1212 23:11:16.039095  152254 ssh_runner.go:195] Run: which crictl
	I1212 23:11:16.039127  152254 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1212 23:11:16.039194  152254 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1212 23:11:16.039218  152254 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1212 23:11:16.039244  152254 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1212 23:11:16.049159  152254 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 23:11:16.049268  152254 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1212 23:11:16.058040  152254 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 23:11:16.204524  152254 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1212 23:11:16.204563  152254 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1212 23:11:16.204619  152254 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1212 23:11:16.204653  152254 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1212 23:11:16.211055  152254 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1212 23:11:16.211101  152254 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1212 23:11:16.211103  152254 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1212 23:11:16.565238  152254 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:11:16.700722  152254 cache_images.go:92] LoadImages completed in 1.005269047s
	W1212 23:11:16.700852  152254 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	I1212 23:11:16.700931  152254 ssh_runner.go:195] Run: crio config
	I1212 23:11:16.757947  152254 cni.go:84] Creating CNI manager for ""
	I1212 23:11:16.757970  152254 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:11:16.757990  152254 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:11:16.758014  152254 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.68 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-401709 NodeName:ingress-addon-legacy-401709 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 23:11:16.758253  152254 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.68
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-401709"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.68
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.68"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:11:16.758381  152254 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-401709 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-401709 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 23:11:16.758452  152254 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1212 23:11:16.768772  152254 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:11:16.768854  152254 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:11:16.778041  152254 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I1212 23:11:16.793999  152254 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1212 23:11:16.810112  152254 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1212 23:11:16.826269  152254 ssh_runner.go:195] Run: grep 192.168.39.68	control-plane.minikube.internal$ /etc/hosts
	I1212 23:11:16.830288  152254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.68	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:11:16.841903  152254 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709 for IP: 192.168.39.68
	I1212 23:11:16.841938  152254 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:11:16.842096  152254 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1212 23:11:16.842148  152254 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1212 23:11:16.842213  152254 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.key
	I1212 23:11:16.842234  152254 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt with IP's: []
	I1212 23:11:17.008171  152254 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt ...
	I1212 23:11:17.008205  152254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: {Name:mk076bc5db259652e7b2f191711f1bab4328658f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:11:17.008397  152254 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.key ...
	I1212 23:11:17.008413  152254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.key: {Name:mk3ecbe54a2c13768c54d17ac9b0b523073ccea2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:11:17.008540  152254 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/apiserver.key.83eedb58
	I1212 23:11:17.008561  152254 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/apiserver.crt.83eedb58 with IP's: [192.168.39.68 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 23:11:17.068910  152254 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/apiserver.crt.83eedb58 ...
	I1212 23:11:17.068943  152254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/apiserver.crt.83eedb58: {Name:mk5cdbd29e3b0b1df6f7efc3869386f65a386c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:11:17.069127  152254 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/apiserver.key.83eedb58 ...
	I1212 23:11:17.069145  152254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/apiserver.key.83eedb58: {Name:mk5c9178d74b726ed92cdbbbdd6ffd9965990304 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:11:17.069243  152254 certs.go:337] copying /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/apiserver.crt.83eedb58 -> /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/apiserver.crt
	I1212 23:11:17.069345  152254 certs.go:341] copying /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/apiserver.key.83eedb58 -> /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/apiserver.key
	I1212 23:11:17.069428  152254 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/proxy-client.key
	I1212 23:11:17.069454  152254 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/proxy-client.crt with IP's: []
	I1212 23:11:17.166857  152254 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/proxy-client.crt ...
	I1212 23:11:17.166892  152254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/proxy-client.crt: {Name:mkc492e7229c85930de19b607c8d58df50b286d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:11:17.167077  152254 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/proxy-client.key ...
	I1212 23:11:17.167094  152254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/proxy-client.key: {Name:mk57c0d904882ef491617ba58e3f23c579add9ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:11:17.167189  152254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 23:11:17.167214  152254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 23:11:17.167230  152254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 23:11:17.167248  152254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 23:11:17.167273  152254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 23:11:17.167292  152254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 23:11:17.167310  152254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 23:11:17.167335  152254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 23:11:17.167394  152254 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1212 23:11:17.167439  152254 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1212 23:11:17.167463  152254 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:11:17.167499  152254 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:11:17.167540  152254 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:11:17.167574  152254 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1212 23:11:17.167633  152254 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1212 23:11:17.167684  152254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:11:17.167705  152254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem -> /usr/share/ca-certificates/143541.pem
	I1212 23:11:17.167723  152254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> /usr/share/ca-certificates/1435412.pem
	I1212 23:11:17.168324  152254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:11:17.192064  152254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:11:17.214192  152254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:11:17.236391  152254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 23:11:17.258475  152254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:11:17.280019  152254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 23:11:17.303245  152254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:11:17.325474  152254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 23:11:17.347453  152254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:11:17.370163  152254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1212 23:11:17.394716  152254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1212 23:11:17.417406  152254 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:11:17.433613  152254 ssh_runner.go:195] Run: openssl version
	I1212 23:11:17.439297  152254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1212 23:11:17.449818  152254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1212 23:11:17.454456  152254 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1212 23:11:17.454510  152254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1212 23:11:17.459924  152254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:11:17.470513  152254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:11:17.481176  152254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:11:17.485723  152254 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:11:17.485783  152254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:11:17.491289  152254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:11:17.501870  152254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1212 23:11:17.513272  152254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1212 23:11:17.518111  152254 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1212 23:11:17.518166  152254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1212 23:11:17.523875  152254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
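Each CA file copied into /usr/share/ca-certificates is then symlinked under /etc/ssl/certs by its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0 above), which is how OpenSSL locates trust anchors at verification time. The link name comes straight from the hash command shown in the log (sketch using the minikubeCA file):

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${HASH}.0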
	I1212 23:11:17.534542  152254 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:11:17.539122  152254 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:11:17.539170  152254 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-401709 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.18.20 ClusterName:ingress-addon-legacy-401709 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mou
ntMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:11:17.539293  152254 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:11:17.539349  152254 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:11:17.589999  152254 cri.go:89] found id: ""
	I1212 23:11:17.590082  152254 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:11:17.601971  152254 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:11:17.613497  152254 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:11:17.625124  152254 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:11:17.625173  152254 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1212 23:11:17.692741  152254 kubeadm.go:322] W1212 23:11:17.685345     955 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1212 23:11:17.838604  152254 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:11:21.190688  152254 kubeadm.go:322] W1212 23:11:21.184890     955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1212 23:11:21.193764  152254 kubeadm.go:322] W1212 23:11:21.186664     955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1212 23:11:31.260503  152254 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1212 23:11:31.260588  152254 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:11:31.260687  152254 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:11:31.260790  152254 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:11:31.260909  152254 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:11:31.261036  152254 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:11:31.261148  152254 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:11:31.261205  152254 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:11:31.261289  152254 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:11:31.262704  152254 out.go:204]   - Generating certificates and keys ...
	I1212 23:11:31.262799  152254 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:11:31.262875  152254 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:11:31.262960  152254 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:11:31.263039  152254 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:11:31.263134  152254 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 23:11:31.263208  152254 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 23:11:31.263287  152254 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 23:11:31.263440  152254 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-401709 localhost] and IPs [192.168.39.68 127.0.0.1 ::1]
	I1212 23:11:31.263521  152254 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 23:11:31.263661  152254 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-401709 localhost] and IPs [192.168.39.68 127.0.0.1 ::1]
	I1212 23:11:31.263724  152254 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:11:31.263806  152254 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:11:31.263889  152254 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 23:11:31.263942  152254 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:11:31.264000  152254 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:11:31.264090  152254 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:11:31.264180  152254 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:11:31.264267  152254 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:11:31.264352  152254 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:11:31.265737  152254 out.go:204]   - Booting up control plane ...
	I1212 23:11:31.265832  152254 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:11:31.265899  152254 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:11:31.265963  152254 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:11:31.266058  152254 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:11:31.266244  152254 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:11:31.266313  152254 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503804 seconds
	I1212 23:11:31.266398  152254 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:11:31.266504  152254 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:11:31.266559  152254 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:11:31.266670  152254 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-401709 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1212 23:11:31.266720  152254 kubeadm.go:322] [bootstrap-token] Using token: 8rlbkq.f51ih06yo61mguri
	I1212 23:11:31.268257  152254 out.go:204]   - Configuring RBAC rules ...
	I1212 23:11:31.268347  152254 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:11:31.268415  152254 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:11:31.268561  152254 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:11:31.268675  152254 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:11:31.268775  152254 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:11:31.268855  152254 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:11:31.268952  152254 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:11:31.269003  152254 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:11:31.269041  152254 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:11:31.269051  152254 kubeadm.go:322] 
	I1212 23:11:31.269110  152254 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:11:31.269119  152254 kubeadm.go:322] 
	I1212 23:11:31.269179  152254 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:11:31.269188  152254 kubeadm.go:322] 
	I1212 23:11:31.269213  152254 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:11:31.269277  152254 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:11:31.269339  152254 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:11:31.269345  152254 kubeadm.go:322] 
	I1212 23:11:31.269407  152254 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:11:31.269535  152254 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:11:31.269619  152254 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:11:31.269627  152254 kubeadm.go:322] 
	I1212 23:11:31.269707  152254 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:11:31.269804  152254 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:11:31.269813  152254 kubeadm.go:322] 
	I1212 23:11:31.269915  152254 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8rlbkq.f51ih06yo61mguri \
	I1212 23:11:31.270006  152254 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d \
	I1212 23:11:31.270028  152254 kubeadm.go:322]     --control-plane 
	I1212 23:11:31.270031  152254 kubeadm.go:322] 
	I1212 23:11:31.270098  152254 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:11:31.270104  152254 kubeadm.go:322] 
	I1212 23:11:31.270205  152254 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8rlbkq.f51ih06yo61mguri \
	I1212 23:11:31.270315  152254 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1212 23:11:31.270328  152254 cni.go:84] Creating CNI manager for ""
	I1212 23:11:31.270334  152254 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:11:31.271800  152254 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:11:31.272964  152254 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:11:31.283114  152254 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
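The bridge CNI config is generated in memory and copied to /etc/cni/net.d/1-k8s.conflist; its exact contents are not echoed in this log. For orientation, a hypothetical minimal bridge conflist in the same spirit, using the pod CIDR from the kubeadm config above (illustrative only, not the file minikube wrote):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }
    EOF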
	I1212 23:11:31.305600  152254 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:11:31.305718  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:31.305727  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=ingress-addon-legacy-401709 minikube.k8s.io/updated_at=2023_12_12T23_11_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:31.537538  152254 ops.go:34] apiserver oom_adj: -16
	I1212 23:11:31.537585  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:31.754838  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:32.371682  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:32.871655  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:33.371689  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:33.871081  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:34.371278  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:34.871220  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:35.371130  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:35.872080  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:36.371054  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:36.871294  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:37.372065  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:37.872111  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:38.371240  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:38.871681  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:39.371254  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:39.871089  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:40.371442  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:40.872020  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:41.372045  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:41.871224  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:42.371134  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:42.871346  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:43.371270  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:43.871151  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:44.371751  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:44.871088  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:45.371893  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:45.872027  152254 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:11:46.067364  152254 kubeadm.go:1088] duration metric: took 14.761722734s to wait for elevateKubeSystemPrivileges.
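The repeated `kubectl get sa default` calls above are minikube waiting for the default ServiceAccount to exist, the step timed here as elevateKubeSystemPrivileges. A rough shell equivalent of that wait (a sketch, not minikube's actual Go implementation):

    # poll until the "default" ServiceAccount exists, as the loop above does
    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done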
	I1212 23:11:46.067410  152254 kubeadm.go:406] StartCluster complete in 28.528242006s
	I1212 23:11:46.067436  152254 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:11:46.067522  152254 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:11:46.068217  152254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:11:46.068590  152254 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:11:46.068672  152254 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:11:46.068751  152254 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-401709"
	I1212 23:11:46.068767  152254 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-401709"
	I1212 23:11:46.068802  152254 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-401709"
	I1212 23:11:46.068822  152254 config.go:182] Loaded profile config "ingress-addon-legacy-401709": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1212 23:11:46.068777  152254 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-401709"
	I1212 23:11:46.069047  152254 host.go:66] Checking if "ingress-addon-legacy-401709" exists ...
	I1212 23:11:46.069231  152254 kapi.go:59] client config for ingress-addon-legacy-401709: &rest.Config{Host:"https://192.168.39.68:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt", KeyFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.key", CAFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]u
int8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:11:46.069340  152254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:11:46.069357  152254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:11:46.069380  152254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:11:46.069381  152254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:11:46.069990  152254 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 23:11:46.084962  152254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41391
	I1212 23:11:46.085367  152254 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:11:46.085840  152254 main.go:141] libmachine: Using API Version  1
	I1212 23:11:46.085859  152254 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:11:46.086220  152254 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:11:46.086394  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetState
	I1212 23:11:46.087823  152254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34971
	I1212 23:11:46.088256  152254 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:11:46.088773  152254 main.go:141] libmachine: Using API Version  1
	I1212 23:11:46.088798  152254 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:11:46.089183  152254 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:11:46.089280  152254 kapi.go:59] client config for ingress-addon-legacy-401709: &rest.Config{Host:"https://192.168.39.68:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt", KeyFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.key", CAFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]u
int8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:11:46.089621  152254 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-401709"
	I1212 23:11:46.089662  152254 host.go:66] Checking if "ingress-addon-legacy-401709" exists ...
	I1212 23:11:46.089786  152254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:11:46.089828  152254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:11:46.090152  152254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:11:46.090188  152254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:11:46.104224  152254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44047
	I1212 23:11:46.104440  152254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38655
	I1212 23:11:46.104626  152254 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:11:46.104816  152254 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:11:46.105084  152254 main.go:141] libmachine: Using API Version  1
	I1212 23:11:46.105102  152254 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:11:46.105253  152254 main.go:141] libmachine: Using API Version  1
	I1212 23:11:46.105273  152254 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:11:46.105435  152254 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:11:46.105587  152254 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:11:46.105742  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetState
	I1212 23:11:46.106073  152254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:11:46.106113  152254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:11:46.107401  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .DriverName
	I1212 23:11:46.109705  152254 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:11:46.110878  152254 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:11:46.110894  152254 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:11:46.110912  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHHostname
	I1212 23:11:46.114079  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:46.114548  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:19:9d", ip: ""} in network mk-ingress-addon-legacy-401709: {Iface:virbr1 ExpiryTime:2023-12-13 00:10:59 +0000 UTC Type:0 Mac:52:54:00:e8:19:9d Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ingress-addon-legacy-401709 Clientid:01:52:54:00:e8:19:9d}
	I1212 23:11:46.114591  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined IP address 192.168.39.68 and MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:46.114841  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHPort
	I1212 23:11:46.115043  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHKeyPath
	I1212 23:11:46.115208  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHUsername
	I1212 23:11:46.115383  152254 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/ingress-addon-legacy-401709/id_rsa Username:docker}
	I1212 23:11:46.121550  152254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I1212 23:11:46.121994  152254 main.go:141] libmachine: () Calling .GetVersion
	W1212 23:11:46.122081  152254 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-401709" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1212 23:11:46.122105  152254 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1212 23:11:46.122137  152254 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:11:46.123749  152254 out.go:177] * Verifying Kubernetes components...
	I1212 23:11:46.122481  152254 main.go:141] libmachine: Using API Version  1
	I1212 23:11:46.125293  152254 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:11:46.125353  152254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:11:46.125708  152254 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:11:46.125902  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetState
	I1212 23:11:46.127607  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .DriverName
	I1212 23:11:46.128000  152254 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:11:46.128022  152254 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:11:46.128040  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHHostname
	I1212 23:11:46.131143  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:46.131573  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:19:9d", ip: ""} in network mk-ingress-addon-legacy-401709: {Iface:virbr1 ExpiryTime:2023-12-13 00:10:59 +0000 UTC Type:0 Mac:52:54:00:e8:19:9d Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ingress-addon-legacy-401709 Clientid:01:52:54:00:e8:19:9d}
	I1212 23:11:46.131602  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | domain ingress-addon-legacy-401709 has defined IP address 192.168.39.68 and MAC address 52:54:00:e8:19:9d in network mk-ingress-addon-legacy-401709
	I1212 23:11:46.131870  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHPort
	I1212 23:11:46.132067  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHKeyPath
	I1212 23:11:46.132213  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .GetSSHUsername
	I1212 23:11:46.132356  152254 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/ingress-addon-legacy-401709/id_rsa Username:docker}
	I1212 23:11:46.245508  152254 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 23:11:46.245743  152254 kapi.go:59] client config for ingress-addon-legacy-401709: &rest.Config{Host:"https://192.168.39.68:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt", KeyFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.key", CAFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]u
int8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:11:46.246038  152254 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-401709" to be "Ready" ...
	I1212 23:11:46.250442  152254 node_ready.go:49] node "ingress-addon-legacy-401709" has status "Ready":"True"
	I1212 23:11:46.250459  152254 node_ready.go:38] duration metric: took 4.393412ms waiting for node "ingress-addon-legacy-401709" to be "Ready" ...
	I1212 23:11:46.250467  152254 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:11:46.266942  152254 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-dx8qr" in "kube-system" namespace to be "Ready" ...
	I1212 23:11:46.270895  152254 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:11:46.339007  152254 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:11:46.737680  152254 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
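The sed pipeline above splices a hosts block for host.minikube.internal (192.168.39.1) into the CoreDNS Corefile. One way to confirm the record landed, assuming the kubeconfig context carries the profile name as minikube normally sets it:

    # check the injected host record in the CoreDNS ConfigMap
    kubectl --context ingress-addon-legacy-401709 -n kube-system \
        get configmap coredns -o yaml | grep -B1 -A3 'host.minikube.internal'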
	I1212 23:11:46.937591  152254 main.go:141] libmachine: Making call to close driver server
	I1212 23:11:46.937626  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .Close
	I1212 23:11:46.937954  152254 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:11:46.937975  152254 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:11:46.937987  152254 main.go:141] libmachine: Making call to close driver server
	I1212 23:11:46.938005  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .Close
	I1212 23:11:46.938035  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | Closing plugin on server side
	I1212 23:11:46.938239  152254 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:11:46.938257  152254 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:11:46.953060  152254 main.go:141] libmachine: Making call to close driver server
	I1212 23:11:46.953088  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .Close
	I1212 23:11:46.953313  152254 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:11:46.953330  152254 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:11:46.953355  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | Closing plugin on server side
	I1212 23:11:47.014311  152254 main.go:141] libmachine: Making call to close driver server
	I1212 23:11:47.014341  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .Close
	I1212 23:11:47.014616  152254 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:11:47.014633  152254 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:11:47.014642  152254 main.go:141] libmachine: Making call to close driver server
	I1212 23:11:47.014650  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) Calling .Close
	I1212 23:11:47.014866  152254 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:11:47.014898  152254 main.go:141] libmachine: (ingress-addon-legacy-401709) DBG | Closing plugin on server side
	I1212 23:11:47.014901  152254 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:11:47.017096  152254 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1212 23:11:47.019091  152254 addons.go:502] enable addons completed in 950.42053ms: enabled=[default-storageclass storage-provisioner]
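Only default-storageclass and storage-provisioner are enabled at this point, matching the toEnable map logged when addon setup began. The addon state for this profile can be listed manually (not part of the test flow):

    out/minikube-linux-amd64 -p ingress-addon-legacy-401709 addons list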
	I1212 23:11:48.320038  152254 pod_ready.go:102] pod "coredns-66bff467f8-dx8qr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:11:50.817406  152254 pod_ready.go:102] pod "coredns-66bff467f8-dx8qr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:11:52.819213  152254 pod_ready.go:102] pod "coredns-66bff467f8-dx8qr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:11:55.318659  152254 pod_ready.go:102] pod "coredns-66bff467f8-dx8qr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:11:57.318905  152254 pod_ready.go:102] pod "coredns-66bff467f8-dx8qr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:11:59.818796  152254 pod_ready.go:102] pod "coredns-66bff467f8-dx8qr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:12:02.317738  152254 pod_ready.go:102] pod "coredns-66bff467f8-dx8qr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:12:04.817555  152254 pod_ready.go:102] pod "coredns-66bff467f8-dx8qr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:12:06.819855  152254 pod_ready.go:102] pod "coredns-66bff467f8-dx8qr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:12:09.317821  152254 pod_ready.go:102] pod "coredns-66bff467f8-dx8qr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:12:11.317874  152254 pod_ready.go:102] pod "coredns-66bff467f8-dx8qr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:12:13.838011  152254 pod_ready.go:102] pod "coredns-66bff467f8-dx8qr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:12:16.318166  152254 pod_ready.go:102] pod "coredns-66bff467f8-dx8qr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:12:18.818396  152254 pod_ready.go:102] pod "coredns-66bff467f8-dx8qr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:12:21.318088  152254 pod_ready.go:102] pod "coredns-66bff467f8-dx8qr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:12:22.319029  152254 pod_ready.go:92] pod "coredns-66bff467f8-dx8qr" in "kube-system" namespace has status "Ready":"True"
	I1212 23:12:22.319050  152254 pod_ready.go:81] duration metric: took 36.05208496s waiting for pod "coredns-66bff467f8-dx8qr" in "kube-system" namespace to be "Ready" ...
	I1212 23:12:22.319059  152254 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-v6xbl" in "kube-system" namespace to be "Ready" ...
	I1212 23:12:24.337177  152254 pod_ready.go:102] pod "coredns-66bff467f8-v6xbl" in "kube-system" namespace has status "Ready":"False"
	I1212 23:12:25.836609  152254 pod_ready.go:92] pod "coredns-66bff467f8-v6xbl" in "kube-system" namespace has status "Ready":"True"
	I1212 23:12:25.836636  152254 pod_ready.go:81] duration metric: took 3.517570045s waiting for pod "coredns-66bff467f8-v6xbl" in "kube-system" namespace to be "Ready" ...
	I1212 23:12:25.836648  152254 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-401709" in "kube-system" namespace to be "Ready" ...
	I1212 23:12:25.840770  152254 pod_ready.go:92] pod "etcd-ingress-addon-legacy-401709" in "kube-system" namespace has status "Ready":"True"
	I1212 23:12:25.840794  152254 pod_ready.go:81] duration metric: took 4.13668ms waiting for pod "etcd-ingress-addon-legacy-401709" in "kube-system" namespace to be "Ready" ...
	I1212 23:12:25.840805  152254 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-401709" in "kube-system" namespace to be "Ready" ...
	I1212 23:12:25.845795  152254 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-401709" in "kube-system" namespace has status "Ready":"True"
	I1212 23:12:25.845812  152254 pod_ready.go:81] duration metric: took 4.998084ms waiting for pod "kube-apiserver-ingress-addon-legacy-401709" in "kube-system" namespace to be "Ready" ...
	I1212 23:12:25.845821  152254 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-401709" in "kube-system" namespace to be "Ready" ...
	I1212 23:12:25.850227  152254 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-401709" in "kube-system" namespace has status "Ready":"True"
	I1212 23:12:25.850242  152254 pod_ready.go:81] duration metric: took 4.416018ms waiting for pod "kube-controller-manager-ingress-addon-legacy-401709" in "kube-system" namespace to be "Ready" ...
	I1212 23:12:25.850249  152254 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sh8jc" in "kube-system" namespace to be "Ready" ...
	I1212 23:12:25.912053  152254 request.go:629] Waited for 59.230669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ingress-addon-legacy-401709
	I1212 23:12:25.915762  152254 pod_ready.go:92] pod "kube-proxy-sh8jc" in "kube-system" namespace has status "Ready":"True"
	I1212 23:12:25.915782  152254 pod_ready.go:81] duration metric: took 65.527027ms waiting for pod "kube-proxy-sh8jc" in "kube-system" namespace to be "Ready" ...
	I1212 23:12:25.915790  152254 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-401709" in "kube-system" namespace to be "Ready" ...
	I1212 23:12:26.111957  152254 request.go:629] Waited for 196.09651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-401709
	I1212 23:12:26.312636  152254 request.go:629] Waited for 197.379562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes/ingress-addon-legacy-401709
	I1212 23:12:26.315912  152254 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-401709" in "kube-system" namespace has status "Ready":"True"
	I1212 23:12:26.315935  152254 pod_ready.go:81] duration metric: took 400.139035ms waiting for pod "kube-scheduler-ingress-addon-legacy-401709" in "kube-system" namespace to be "Ready" ...
	I1212 23:12:26.315944  152254 pod_ready.go:38] duration metric: took 40.065468746s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
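The per-pod readiness polling above can be reproduced by hand with kubectl wait. A sketch that is slightly broader than minikube's label list (it waits on every kube-system pod, including storage-provisioner):

    # wait for kube-system pods to be Ready; 6m matches the budget in the log
    kubectl --context ingress-addon-legacy-401709 -n kube-system \
        wait pod --all --for=condition=Ready --timeout=6m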
	I1212 23:12:26.315961  152254 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:12:26.316019  152254 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:12:26.329693  152254 api_server.go:72] duration metric: took 40.207518722s to wait for apiserver process to appear ...
	I1212 23:12:26.329714  152254 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:12:26.329729  152254 api_server.go:253] Checking apiserver healthz at https://192.168.39.68:8443/healthz ...
	I1212 23:12:26.335640  152254 api_server.go:279] https://192.168.39.68:8443/healthz returned 200:
	ok
	I1212 23:12:26.336731  152254 api_server.go:141] control plane version: v1.18.20
	I1212 23:12:26.336752  152254 api_server.go:131] duration metric: took 7.032503ms to wait for apiserver health ...
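The healthz probe above is made with the profile's client certificate; the same check can be issued by hand using the paths shown in the client config earlier in this log (a one-off spot check, not part of the test):

    curl --cacert /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt \
         --cert   /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt \
         --key    /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.key \
         https://192.168.39.68:8443/healthz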
	I1212 23:12:26.336759  152254 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:12:26.512182  152254 request.go:629] Waited for 175.339187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I1212 23:12:26.519784  152254 system_pods.go:59] 8 kube-system pods found
	I1212 23:12:26.519815  152254 system_pods.go:61] "coredns-66bff467f8-dx8qr" [f7a144f4-46da-4ed1-bf80-a9ad58810c4f] Running
	I1212 23:12:26.519822  152254 system_pods.go:61] "coredns-66bff467f8-v6xbl" [c72a33b4-030f-4328-9e98-5bac512a523a] Running
	I1212 23:12:26.519828  152254 system_pods.go:61] "etcd-ingress-addon-legacy-401709" [4bb25672-fafa-45c8-a94f-3e27cd72216a] Running
	I1212 23:12:26.519836  152254 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-401709" [6d3a3e21-03ca-442c-954d-42b36743bf4a] Running
	I1212 23:12:26.519843  152254 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-401709" [df174b00-5887-4e00-bfe6-b7ab5935ea6c] Running
	I1212 23:12:26.519853  152254 system_pods.go:61] "kube-proxy-sh8jc" [8159c498-b057-4d17-a93d-2c404334c1a5] Running
	I1212 23:12:26.519864  152254 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-401709" [f6cd379d-12f5-42e6-af00-d92e343697b4] Running
	I1212 23:12:26.519876  152254 system_pods.go:61] "storage-provisioner" [aa18af60-3887-47cb-84b1-6b005d7c3d33] Running
	I1212 23:12:26.519888  152254 system_pods.go:74] duration metric: took 183.121587ms to wait for pod list to return data ...
	I1212 23:12:26.519901  152254 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:12:26.712345  152254 request.go:629] Waited for 192.35816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/default/serviceaccounts
	I1212 23:12:26.715191  152254 default_sa.go:45] found service account: "default"
	I1212 23:12:26.715213  152254 default_sa.go:55] duration metric: took 195.302486ms for default service account to be created ...
	I1212 23:12:26.715220  152254 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:12:26.912634  152254 request.go:629] Waited for 197.343294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/namespaces/kube-system/pods
	I1212 23:12:26.918743  152254 system_pods.go:86] 8 kube-system pods found
	I1212 23:12:26.918769  152254 system_pods.go:89] "coredns-66bff467f8-dx8qr" [f7a144f4-46da-4ed1-bf80-a9ad58810c4f] Running
	I1212 23:12:26.918774  152254 system_pods.go:89] "coredns-66bff467f8-v6xbl" [c72a33b4-030f-4328-9e98-5bac512a523a] Running
	I1212 23:12:26.918778  152254 system_pods.go:89] "etcd-ingress-addon-legacy-401709" [4bb25672-fafa-45c8-a94f-3e27cd72216a] Running
	I1212 23:12:26.918782  152254 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-401709" [6d3a3e21-03ca-442c-954d-42b36743bf4a] Running
	I1212 23:12:26.918786  152254 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-401709" [df174b00-5887-4e00-bfe6-b7ab5935ea6c] Running
	I1212 23:12:26.918790  152254 system_pods.go:89] "kube-proxy-sh8jc" [8159c498-b057-4d17-a93d-2c404334c1a5] Running
	I1212 23:12:26.918794  152254 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-401709" [f6cd379d-12f5-42e6-af00-d92e343697b4] Running
	I1212 23:12:26.918797  152254 system_pods.go:89] "storage-provisioner" [aa18af60-3887-47cb-84b1-6b005d7c3d33] Running
	I1212 23:12:26.918803  152254 system_pods.go:126] duration metric: took 203.578101ms to wait for k8s-apps to be running ...
	I1212 23:12:26.918813  152254 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:12:26.918852  152254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:12:26.932137  152254 system_svc.go:56] duration metric: took 13.3173ms WaitForService to wait for kubelet.
	I1212 23:12:26.932162  152254 kubeadm.go:581] duration metric: took 40.809993793s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:12:26.932179  152254 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:12:27.112645  152254 request.go:629] Waited for 180.372489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.68:8443/api/v1/nodes
	I1212 23:12:27.115544  152254 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:12:27.115573  152254 node_conditions.go:123] node cpu capacity is 2
	I1212 23:12:27.115588  152254 node_conditions.go:105] duration metric: took 183.403771ms to run NodePressure ...
	I1212 23:12:27.115603  152254 start.go:228] waiting for startup goroutines ...
	I1212 23:12:27.115612  152254 start.go:233] waiting for cluster config update ...
	I1212 23:12:27.115625  152254 start.go:242] writing updated cluster config ...
	I1212 23:12:27.115877  152254 ssh_runner.go:195] Run: rm -f paused
	I1212 23:12:27.163003  152254 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1212 23:12:27.164947  152254 out.go:177] 
	W1212 23:12:27.166586  152254 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1212 23:12:27.168071  152254 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1212 23:12:27.169463  152254 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-401709" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-12 23:10:55 UTC, ends at Tue 2023-12-12 23:15:35 UTC. --
	Dec 12 23:15:34 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:34.930536139Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702422934930517788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=44769a3a-15f2-4b28-bce5-a62089097376 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:15:34 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:34.931436693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2c72ed51-0d59-4718-93f2-32ec9775d037 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:15:34 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:34.931488679Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2c72ed51-0d59-4718-93f2-32ec9775d037 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:15:34 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:34.931776823Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa8d6fdabbb2e9abbe283b66818a33c763bd3854eb4e8bfa060f9f1e92dbec6e,PodSandboxId:652552263cebfbf2b92759b23a042bbe0959cdd120e8bfa75999e6d1aa94e0f0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702422927526708134,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-5mpp7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1587c6a0-b9aa-4767-bea2-ad25996b57ed,},Annotations:map[string]string{io.kubernetes.container.hash: 19dbc10c,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339cc74758c97aaec6b0e06637021052046905e5b9b39be5a012730a1c67f951,PodSandboxId:0b49d92a042da537ac85715c38cddab127d2931fced3fff559d28d482d28e750,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702422787135522411,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 84cf89ad-bc4e-4b24-a63d-6e00b0c634f7,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 12436308,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1177cd5f98acd506c121e1857985699cadbe9429b01500ca5839633fb7937c3,PodSandboxId:1bfd3c7fe5571858f66b1fc78cc6cd154012256b936ce7f5fc8c6e65ccdc7882,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702422737870438319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: aa18af60-3887-47cb-84b1-6b005d7c3d33,},Annotations:map[string]string{io.kubernetes.container.hash: 45703f03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be18e38e9c74256797978e3fdb76175fbf3d7bafc4263a15d23926dc98c26cdc,PodSandboxId:c4aae657927224af592cd97dcdf2a53f9102ebe2d47e61d331a7ed239b9ededc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1702422708532917666,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sh8jc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8159c498-b057-4d17-a93d-2c4043
34c1a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6f185d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d92255f8a61a8273ad4d9ff05b2f84c4ed2bb70de463aba2ce2557ee928e5d8,PodSandboxId:8268975b54204e24128154f1a1ec74a5c3c023a451bba625eeccbb332d8fded6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1702422708188882071,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-v6xbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72a33b4-030f-4328-9e98-5bac512a523a,},Annotations:map[string]string{io
.kubernetes.container.hash: 1db1fe65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e01bbb4a279930a8cd14e5618aedafdd824196e8127a7d77143bfa01d071cfa,PodSandboxId:9539f8d8657ed52861342090084c73eb55f7988bb762b08d4288770b0472bf73,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1702422708129394051,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-66bff467f8-dx8qr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a144f4-46da-4ed1-bf80-a9ad58810c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 1db1fe65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eedd9d442bdd13ee69c58a2b67952c799db7c92b206b24975e8c92b192b9d6c4,PodSandboxId:1bfd3c7fe5571858f66b1fc78cc6cd154012256b936ce7f5fc8c6e65ccdc7882,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s
-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702422707664134931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa18af60-3887-47cb-84b1-6b005d7c3d33,},Annotations:map[string]string{io.kubernetes.container.hash: 45703f03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4520c46e577e942917f1550bda35379db52d7931ac526b27dac9908a477d97f,PodSandboxId:7c6ecdc40669685695d3f37ab7be60e5f706929a7ae57458436e51fbb3310386,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198
ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1702422684086185418,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-401709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3daddf901be09cbb13bf2cf79c54d62,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6d7530,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0c62d5deb8245e9d922e9527ba7326643209d0fbd3560d2cd1364ebce79e2e5,PodSandboxId:392767bf4f60686a97ecaf76c9e10b42221c8a9b3a32975a6e4a81610a720493,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd059
3ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1702422682982445170,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-401709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28fbf77328ae1537e9e604be434241ff,},Annotations:map[string]string{io.kubernetes.container.hash: 15a792a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683c50f3c552ed90f3b8e7541338c95c5780471206b87f45c7cc1aca45a41049,PodSandboxId:e5082426d931e042542334d8b0d64d4a25002cad2b0fcd2e1551b0def25ec2d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff
81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1702422682691046086,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-401709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed8f6ab8047e991772e5e69594295c1a2aa24080909c6acdc36312ec66f4e04f,PodSandboxId:29723e3a42d249f6b4cc6027644ee023f9aad3099134c4ce7465968a5ec6fd05,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73
a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1702422682715811078,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-401709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2c72ed51-0d59-4718-93f2-32ec9775d037 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:15:34 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:34.971479784Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=17cecd3c-627b-4c58-ae05-67c9f74c8a7e name=/runtime.v1.RuntimeService/Version
	Dec 12 23:15:34 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:34.971537509Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=17cecd3c-627b-4c58-ae05-67c9f74c8a7e name=/runtime.v1.RuntimeService/Version
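The journal entries here are CRI-O logging the CRI gRPC calls it serves (Version, ImageFsInfo, ListContainers). The same endpoints can be queried by hand with crictl, assuming crictl is present in the guest as it normally is in the minikube ISO:

    minikube -p ingress-addon-legacy-401709 ssh -- sudo crictl version
    minikube -p ingress-addon-legacy-401709 ssh -- sudo crictl imagefsinfo
    minikube -p ingress-addon-legacy-401709 ssh -- sudo crictl ps -a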
	Dec 12 23:15:34 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:34.972543211Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1db0701d-a5fd-493c-af70-cbd63b405877 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:15:34 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:34.973152223Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702422934973133738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=1db0701d-a5fd-493c-af70-cbd63b405877 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:15:34 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:34.973636781Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=da84d3e8-dc07-4c98-a60c-e020440f0312 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:15:34 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:34.973683518Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=da84d3e8-dc07-4c98-a60c-e020440f0312 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:15:34 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:34.973898663Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa8d6fdabbb2e9abbe283b66818a33c763bd3854eb4e8bfa060f9f1e92dbec6e,PodSandboxId:652552263cebfbf2b92759b23a042bbe0959cdd120e8bfa75999e6d1aa94e0f0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702422927526708134,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-5mpp7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1587c6a0-b9aa-4767-bea2-ad25996b57ed,},Annotations:map[string]string{io.kubernetes.container.hash: 19dbc10c,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339cc74758c97aaec6b0e06637021052046905e5b9b39be5a012730a1c67f951,PodSandboxId:0b49d92a042da537ac85715c38cddab127d2931fced3fff559d28d482d28e750,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702422787135522411,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 84cf89ad-bc4e-4b24-a63d-6e00b0c634f7,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 12436308,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1177cd5f98acd506c121e1857985699cadbe9429b01500ca5839633fb7937c3,PodSandboxId:1bfd3c7fe5571858f66b1fc78cc6cd154012256b936ce7f5fc8c6e65ccdc7882,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702422737870438319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: aa18af60-3887-47cb-84b1-6b005d7c3d33,},Annotations:map[string]string{io.kubernetes.container.hash: 45703f03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be18e38e9c74256797978e3fdb76175fbf3d7bafc4263a15d23926dc98c26cdc,PodSandboxId:c4aae657927224af592cd97dcdf2a53f9102ebe2d47e61d331a7ed239b9ededc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1702422708532917666,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sh8jc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8159c498-b057-4d17-a93d-2c4043
34c1a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6f185d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d92255f8a61a8273ad4d9ff05b2f84c4ed2bb70de463aba2ce2557ee928e5d8,PodSandboxId:8268975b54204e24128154f1a1ec74a5c3c023a451bba625eeccbb332d8fded6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1702422708188882071,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-v6xbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72a33b4-030f-4328-9e98-5bac512a523a,},Annotations:map[string]string{io
.kubernetes.container.hash: 1db1fe65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e01bbb4a279930a8cd14e5618aedafdd824196e8127a7d77143bfa01d071cfa,PodSandboxId:9539f8d8657ed52861342090084c73eb55f7988bb762b08d4288770b0472bf73,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1702422708129394051,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-66bff467f8-dx8qr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a144f4-46da-4ed1-bf80-a9ad58810c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 1db1fe65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eedd9d442bdd13ee69c58a2b67952c799db7c92b206b24975e8c92b192b9d6c4,PodSandboxId:1bfd3c7fe5571858f66b1fc78cc6cd154012256b936ce7f5fc8c6e65ccdc7882,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s
-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702422707664134931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa18af60-3887-47cb-84b1-6b005d7c3d33,},Annotations:map[string]string{io.kubernetes.container.hash: 45703f03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4520c46e577e942917f1550bda35379db52d7931ac526b27dac9908a477d97f,PodSandboxId:7c6ecdc40669685695d3f37ab7be60e5f706929a7ae57458436e51fbb3310386,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198
ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1702422684086185418,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-401709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3daddf901be09cbb13bf2cf79c54d62,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6d7530,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0c62d5deb8245e9d922e9527ba7326643209d0fbd3560d2cd1364ebce79e2e5,PodSandboxId:392767bf4f60686a97ecaf76c9e10b42221c8a9b3a32975a6e4a81610a720493,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd059
3ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1702422682982445170,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-401709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28fbf77328ae1537e9e604be434241ff,},Annotations:map[string]string{io.kubernetes.container.hash: 15a792a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683c50f3c552ed90f3b8e7541338c95c5780471206b87f45c7cc1aca45a41049,PodSandboxId:e5082426d931e042542334d8b0d64d4a25002cad2b0fcd2e1551b0def25ec2d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff
81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1702422682691046086,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-401709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed8f6ab8047e991772e5e69594295c1a2aa24080909c6acdc36312ec66f4e04f,PodSandboxId:29723e3a42d249f6b4cc6027644ee023f9aad3099134c4ce7465968a5ec6fd05,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73
a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1702422682715811078,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-401709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=da84d3e8-dc07-4c98-a60c-e020440f0312 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:15:35 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:35.014211698Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1a648c5d-4d37-4072-b624-d1b59a61314f name=/runtime.v1.RuntimeService/Version
	Dec 12 23:15:35 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:35.014268399Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1a648c5d-4d37-4072-b624-d1b59a61314f name=/runtime.v1.RuntimeService/Version
	Dec 12 23:15:35 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:35.015768645Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=96b434b7-68f9-4e8c-b5b5-a8c71fd98f64 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:15:35 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:35.016387420Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702422935016369567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=96b434b7-68f9-4e8c-b5b5-a8c71fd98f64 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:15:35 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:35.016854468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c8ec4239-4e59-4091-be45-6b621ec5993f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:15:35 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:35.016901035Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c8ec4239-4e59-4091-be45-6b621ec5993f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:15:35 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:35.017177823Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa8d6fdabbb2e9abbe283b66818a33c763bd3854eb4e8bfa060f9f1e92dbec6e,PodSandboxId:652552263cebfbf2b92759b23a042bbe0959cdd120e8bfa75999e6d1aa94e0f0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702422927526708134,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-5mpp7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1587c6a0-b9aa-4767-bea2-ad25996b57ed,},Annotations:map[string]string{io.kubernetes.container.hash: 19dbc10c,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339cc74758c97aaec6b0e06637021052046905e5b9b39be5a012730a1c67f951,PodSandboxId:0b49d92a042da537ac85715c38cddab127d2931fced3fff559d28d482d28e750,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702422787135522411,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 84cf89ad-bc4e-4b24-a63d-6e00b0c634f7,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 12436308,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1177cd5f98acd506c121e1857985699cadbe9429b01500ca5839633fb7937c3,PodSandboxId:1bfd3c7fe5571858f66b1fc78cc6cd154012256b936ce7f5fc8c6e65ccdc7882,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702422737870438319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: aa18af60-3887-47cb-84b1-6b005d7c3d33,},Annotations:map[string]string{io.kubernetes.container.hash: 45703f03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be18e38e9c74256797978e3fdb76175fbf3d7bafc4263a15d23926dc98c26cdc,PodSandboxId:c4aae657927224af592cd97dcdf2a53f9102ebe2d47e61d331a7ed239b9ededc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1702422708532917666,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sh8jc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8159c498-b057-4d17-a93d-2c4043
34c1a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6f185d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d92255f8a61a8273ad4d9ff05b2f84c4ed2bb70de463aba2ce2557ee928e5d8,PodSandboxId:8268975b54204e24128154f1a1ec74a5c3c023a451bba625eeccbb332d8fded6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1702422708188882071,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-v6xbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72a33b4-030f-4328-9e98-5bac512a523a,},Annotations:map[string]string{io
.kubernetes.container.hash: 1db1fe65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e01bbb4a279930a8cd14e5618aedafdd824196e8127a7d77143bfa01d071cfa,PodSandboxId:9539f8d8657ed52861342090084c73eb55f7988bb762b08d4288770b0472bf73,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1702422708129394051,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-66bff467f8-dx8qr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a144f4-46da-4ed1-bf80-a9ad58810c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 1db1fe65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eedd9d442bdd13ee69c58a2b67952c799db7c92b206b24975e8c92b192b9d6c4,PodSandboxId:1bfd3c7fe5571858f66b1fc78cc6cd154012256b936ce7f5fc8c6e65ccdc7882,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s
-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702422707664134931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa18af60-3887-47cb-84b1-6b005d7c3d33,},Annotations:map[string]string{io.kubernetes.container.hash: 45703f03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4520c46e577e942917f1550bda35379db52d7931ac526b27dac9908a477d97f,PodSandboxId:7c6ecdc40669685695d3f37ab7be60e5f706929a7ae57458436e51fbb3310386,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198
ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1702422684086185418,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-401709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3daddf901be09cbb13bf2cf79c54d62,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6d7530,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0c62d5deb8245e9d922e9527ba7326643209d0fbd3560d2cd1364ebce79e2e5,PodSandboxId:392767bf4f60686a97ecaf76c9e10b42221c8a9b3a32975a6e4a81610a720493,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd059
3ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1702422682982445170,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-401709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28fbf77328ae1537e9e604be434241ff,},Annotations:map[string]string{io.kubernetes.container.hash: 15a792a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683c50f3c552ed90f3b8e7541338c95c5780471206b87f45c7cc1aca45a41049,PodSandboxId:e5082426d931e042542334d8b0d64d4a25002cad2b0fcd2e1551b0def25ec2d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff
81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1702422682691046086,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-401709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed8f6ab8047e991772e5e69594295c1a2aa24080909c6acdc36312ec66f4e04f,PodSandboxId:29723e3a42d249f6b4cc6027644ee023f9aad3099134c4ce7465968a5ec6fd05,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73
a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1702422682715811078,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-401709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c8ec4239-4e59-4091-be45-6b621ec5993f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:15:35 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:35.054147011Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f974dbd6-cdf0-4300-8568-f068f3353cfb name=/runtime.v1.RuntimeService/Version
	Dec 12 23:15:35 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:35.054233380Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f974dbd6-cdf0-4300-8568-f068f3353cfb name=/runtime.v1.RuntimeService/Version
	Dec 12 23:15:35 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:35.056287469Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ba34962e-85dd-4cd6-a489-06f38c1e34d7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:15:35 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:35.056735777Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702422935056720612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=ba34962e-85dd-4cd6-a489-06f38c1e34d7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:15:35 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:35.057301475Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=807bda7a-706b-4a88-8c1c-e2b56691650e name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:15:35 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:35.057352766Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=807bda7a-706b-4a88-8c1c-e2b56691650e name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:15:35 ingress-addon-legacy-401709 crio[715]: time="2023-12-12 23:15:35.057581109Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa8d6fdabbb2e9abbe283b66818a33c763bd3854eb4e8bfa060f9f1e92dbec6e,PodSandboxId:652552263cebfbf2b92759b23a042bbe0959cdd120e8bfa75999e6d1aa94e0f0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702422927526708134,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-5mpp7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1587c6a0-b9aa-4767-bea2-ad25996b57ed,},Annotations:map[string]string{io.kubernetes.container.hash: 19dbc10c,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339cc74758c97aaec6b0e06637021052046905e5b9b39be5a012730a1c67f951,PodSandboxId:0b49d92a042da537ac85715c38cddab127d2931fced3fff559d28d482d28e750,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702422787135522411,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 84cf89ad-bc4e-4b24-a63d-6e00b0c634f7,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 12436308,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1177cd5f98acd506c121e1857985699cadbe9429b01500ca5839633fb7937c3,PodSandboxId:1bfd3c7fe5571858f66b1fc78cc6cd154012256b936ce7f5fc8c6e65ccdc7882,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702422737870438319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: aa18af60-3887-47cb-84b1-6b005d7c3d33,},Annotations:map[string]string{io.kubernetes.container.hash: 45703f03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be18e38e9c74256797978e3fdb76175fbf3d7bafc4263a15d23926dc98c26cdc,PodSandboxId:c4aae657927224af592cd97dcdf2a53f9102ebe2d47e61d331a7ed239b9ededc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1702422708532917666,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sh8jc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8159c498-b057-4d17-a93d-2c4043
34c1a5,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6f185d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d92255f8a61a8273ad4d9ff05b2f84c4ed2bb70de463aba2ce2557ee928e5d8,PodSandboxId:8268975b54204e24128154f1a1ec74a5c3c023a451bba625eeccbb332d8fded6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1702422708188882071,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-v6xbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c72a33b4-030f-4328-9e98-5bac512a523a,},Annotations:map[string]string{io
.kubernetes.container.hash: 1db1fe65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e01bbb4a279930a8cd14e5618aedafdd824196e8127a7d77143bfa01d071cfa,PodSandboxId:9539f8d8657ed52861342090084c73eb55f7988bb762b08d4288770b0472bf73,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1702422708129394051,Labels:map[string]string{io.kubernetes.container.name: coredns
,io.kubernetes.pod.name: coredns-66bff467f8-dx8qr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a144f4-46da-4ed1-bf80-a9ad58810c4f,},Annotations:map[string]string{io.kubernetes.container.hash: 1db1fe65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eedd9d442bdd13ee69c58a2b67952c799db7c92b206b24975e8c92b192b9d6c4,PodSandboxId:1bfd3c7fe5571858f66b1fc78cc6cd154012256b936ce7f5fc8c6e65ccdc7882,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s
-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702422707664134931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa18af60-3887-47cb-84b1-6b005d7c3d33,},Annotations:map[string]string{io.kubernetes.container.hash: 45703f03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4520c46e577e942917f1550bda35379db52d7931ac526b27dac9908a477d97f,PodSandboxId:7c6ecdc40669685695d3f37ab7be60e5f706929a7ae57458436e51fbb3310386,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198
ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1702422684086185418,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-401709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3daddf901be09cbb13bf2cf79c54d62,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6d7530,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0c62d5deb8245e9d922e9527ba7326643209d0fbd3560d2cd1364ebce79e2e5,PodSandboxId:392767bf4f60686a97ecaf76c9e10b42221c8a9b3a32975a6e4a81610a720493,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd059
3ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1702422682982445170,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-401709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28fbf77328ae1537e9e604be434241ff,},Annotations:map[string]string{io.kubernetes.container.hash: 15a792a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:683c50f3c552ed90f3b8e7541338c95c5780471206b87f45c7cc1aca45a41049,PodSandboxId:e5082426d931e042542334d8b0d64d4a25002cad2b0fcd2e1551b0def25ec2d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff
81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1702422682691046086,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-401709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed8f6ab8047e991772e5e69594295c1a2aa24080909c6acdc36312ec66f4e04f,PodSandboxId:29723e3a42d249f6b4cc6027644ee023f9aad3099134c4ce7465968a5ec6fd05,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73
a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1702422682715811078,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-401709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=807bda7a-706b-4a88-8c1c-e2b56691650e name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                     CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fa8d6fdabbb2e       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7   7 seconds ago       Running             hello-world-app           0                   652552263cebf       hello-world-app-5f5d8b66bb-5mpp7
	339cc74758c97       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc           2 minutes ago       Running             nginx                     0                   0b49d92a042da       nginx
	c1177cd5f98ac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                          3 minutes ago       Running             storage-provisioner       1                   1bfd3c7fe5571       storage-provisioner
	be18e38e9c742       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                          3 minutes ago       Running             kube-proxy                0                   c4aae65792722       kube-proxy-sh8jc
	2d92255f8a61a       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                          3 minutes ago       Running             coredns                   0                   8268975b54204       coredns-66bff467f8-v6xbl
	3e01bbb4a2799       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                          3 minutes ago       Running             coredns                   0                   9539f8d8657ed       coredns-66bff467f8-dx8qr
	eedd9d442bdd1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                          3 minutes ago       Exited              storage-provisioner       0                   1bfd3c7fe5571       storage-provisioner
	d4520c46e577e       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                          4 minutes ago       Running             etcd                      0                   7c6ecdc406696       etcd-ingress-addon-legacy-401709
	d0c62d5deb824       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                          4 minutes ago       Running             kube-apiserver            0                   392767bf4f606       kube-apiserver-ingress-addon-legacy-401709
	ed8f6ab8047e9       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                          4 minutes ago       Running             kube-controller-manager   0                   29723e3a42d24       kube-controller-manager-ingress-addon-legacy-401709
	683c50f3c552e       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                          4 minutes ago       Running             kube-scheduler            0                   e5082426d931e       kube-scheduler-ingress-addon-legacy-401709
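
The crio debug entries above and the container status table they back are two views of the same CRI RuntimeService data. Below is a minimal Go sketch of that ListContainers query, assuming the k8s.io/cri-api v1 client and the CRI-O socket path reported in the node annotations further down (/var/run/crio/crio.sock); the output format and error handling are illustrative only, not part of the test harness.

    // list_containers.go -- minimal sketch of the ListContainers RPC seen in the crio debug log.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial the CRI-O unix socket; the "unix://" target scheme is handled by grpc-go.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // An empty request (no filter) corresponds to the "No filters were applied,
        // returning full container list" lines in the crio log above.
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s  %-25s  %s  attempt=%d\n",
                c.Id[:13], c.Metadata.Name, c.State, c.Metadata.Attempt)
        }
    }
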
	
	* 
	* ==> coredns [2d92255f8a61a8273ad4d9ff05b2f84c4ed2bb70de463aba2ce2557ee928e5d8] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 10.244.0.6:32920 - 50969 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00044253s
	[INFO] 10.244.0.6:32920 - 18383 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000141622s
	[INFO] 10.244.0.6:32920 - 47125 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00014789s
	[INFO] 10.244.0.6:32920 - 35853 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000349102s
	[INFO] 10.244.0.6:32920 - 9024 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000146518s
	[INFO] 10.244.0.6:32920 - 14081 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000135475s
	[INFO] 10.244.0.6:32920 - 173 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000174981s
	[INFO] 10.244.0.6:58439 - 47942 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.0001296s
	[INFO] 10.244.0.6:58439 - 27330 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000933739s
	[INFO] 10.244.0.6:58439 - 32612 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000058242s
	[INFO] 10.244.0.6:58439 - 62922 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00012732s
	[INFO] 10.244.0.6:58439 - 41952 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000082318s
	[INFO] 10.244.0.6:58439 - 3943 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000074633s
	[INFO] 10.244.0.6:58439 - 19756 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053319s
	I1212 23:12:18.401065       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-12-12 23:11:48.400265091 +0000 UTC m=+0.035798688) (total time: 30.000635776s):
	Trace[2019727887]: [30.000635776s] [30.000635776s] END
	I1212 23:12:18.401352       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-12-12 23:11:48.400766142 +0000 UTC m=+0.036299734) (total time: 30.000558939s):
	Trace[1427131847]: [30.000558939s] [30.000558939s] END
	E1212 23:12:18.401398       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I1212 23:12:18.401839       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-12-12 23:11:48.400997503 +0000 UTC m=+0.036531091) (total time: 30.00082932s):
	Trace[939984059]: [30.00082932s] [30.00082932s] END
	E1212 23:12:18.402037       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1212 23:12:18.402111       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> coredns [3e01bbb4a279930a8cd14e5618aedafdd824196e8127a7d77143bfa01d071cfa] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 10.244.0.6:43016 - 14859 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00027541s
	[INFO] 10.244.0.6:43016 - 8747 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000224286s
	[INFO] 10.244.0.6:43016 - 38295 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000120553s
	[INFO] 10.244.0.6:43016 - 52209 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000148878s
	[INFO] 10.244.0.6:43016 - 59697 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000119368s
	[INFO] 10.244.0.6:43016 - 48720 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000089416s
	[INFO] 10.244.0.6:43016 - 22367 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000120675s
	[INFO] 10.244.0.6:41323 - 61844 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000071113s
	[INFO] 10.244.0.6:41323 - 27863 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000064958s
	[INFO] 10.244.0.6:41323 - 14112 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000039025s
	[INFO] 10.244.0.6:41323 - 4196 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000025521s
	[INFO] 10.244.0.6:41323 - 16454 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000035533s
	[INFO] 10.244.0.6:41323 - 23856 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000025761s
	[INFO] 10.244.0.6:41323 - 48598 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000044889s
	I1212 23:12:18.368119       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-12-12 23:11:48.36733083 +0000 UTC m=+0.045604957) (total time: 30.000665863s):
	Trace[2019727887]: [30.000665863s] [30.000665863s] END
	E1212 23:12:18.368213       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I1212 23:12:18.369844       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-12-12 23:11:48.369356496 +0000 UTC m=+0.047630629) (total time: 30.000472978s):
	Trace[1427131847]: [30.000472978s] [30.000472978s] END
	E1212 23:12:18.369897       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I1212 23:12:18.370049       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-12-12 23:11:48.369644554 +0000 UTC m=+0.047918686) (total time: 30.000386308s):
	Trace[939984059]: [30.000386308s] [30.000386308s] END
	E1212 23:12:18.370079       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
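
The query pattern in both coredns logs, several NXDOMAIN answers followed by a single NOERROR for hello-world-app.default.svc.cluster.local, is ordinary search-path expansion rather than a resolution failure: the name has only four dots, below the ndots:5 threshold of the kubelet-generated pod resolv.conf, so every search suffix is tried before the bare name, and each candidate is asked for both A and AAAA records. The querying pod (10.244.0.6) evidently sits in the ingress-nginx namespace, given the first suffix tried. A small illustrative sketch of that ordering follows; the search list and ndots value are assumptions based on the standard kubelet-generated resolv.conf, not taken from the report.

    // search_order.go -- illustrative only: reproduces the candidate order seen in the coredns logs.
    package main

    import (
        "fmt"
        "strings"
    )

    // candidates returns the FQDNs a resolver tries, in order, for a relative name.
    func candidates(name string, search []string, ndots int) []string {
        if strings.Count(name, ".") >= ndots {
            // Enough dots: the name is tried as-is first, then the search list.
            out := []string{name}
            for _, s := range search {
                out = append(out, name+"."+s)
            }
            return out
        }
        // Too few dots: every search suffix is tried before the bare name.
        var out []string
        for _, s := range search {
            out = append(out, name+"."+s)
        }
        return append(out, name)
    }

    func main() {
        // Assumed resolv.conf of a pod in the ingress-nginx namespace:
        //   search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local
        //   options ndots:5
        search := []string{"ingress-nginx.svc.cluster.local", "svc.cluster.local", "cluster.local"}
        for _, q := range candidates("hello-world-app.default.svc.cluster.local", search, 5) {
            fmt.Println(q) // NXDOMAIN for the first three, NOERROR for the last
        }
    }
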
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-401709
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-401709
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446
	                    minikube.k8s.io/name=ingress-addon-legacy-401709
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T23_11_31_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:11:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-401709
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:15:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:15:31 +0000   Tue, 12 Dec 2023 23:11:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:15:31 +0000   Tue, 12 Dec 2023 23:11:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:15:31 +0000   Tue, 12 Dec 2023 23:11:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:15:31 +0000   Tue, 12 Dec 2023 23:11:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ingress-addon-legacy-401709
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee9b5b27c4cc4aa89ced4c0d9fc15d93
	  System UUID:                ee9b5b27-c4cc-4aa8-9ced-4c0d9fc15d93
	  Boot ID:                    fa9acb81-c51f-4738-bfa3-34e4144a14c5
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-5mpp7                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 coredns-66bff467f8-dx8qr                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m50s
	  kube-system                 coredns-66bff467f8-v6xbl                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m50s
	  kube-system                 etcd-ingress-addon-legacy-401709                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-apiserver-ingress-addon-legacy-401709             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-401709    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-proxy-sh8jc                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-scheduler-ingress-addon-legacy-401709             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             140Mi (3%)  340Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  4m14s (x5 over 4m14s)  kubelet     Node ingress-addon-legacy-401709 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s (x5 over 4m14s)  kubelet     Node ingress-addon-legacy-401709 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s (x4 over 4m14s)  kubelet     Node ingress-addon-legacy-401709 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m4s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m4s                   kubelet     Node ingress-addon-legacy-401709 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s                   kubelet     Node ingress-addon-legacy-401709 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s                   kubelet     Node ingress-addon-legacy-401709 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m4s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m54s                  kubelet     Node ingress-addon-legacy-401709 status is now: NodeReady
	  Normal  Starting                 3m47s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Dec12 23:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.093219] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.415350] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.522990] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.148002] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Dec12 23:11] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.682741] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.106783] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.144193] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.109059] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.210216] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[  +8.176082] systemd-fstab-generator[1024]: Ignoring "noauto" for root device
	[  +3.278689] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.543477] systemd-fstab-generator[1431]: Ignoring "noauto" for root device
	[ +16.939999] kauditd_printk_skb: 6 callbacks suppressed
	[Dec12 23:12] kauditd_printk_skb: 16 callbacks suppressed
	[ +11.732620] kauditd_printk_skb: 4 callbacks suppressed
	[Dec12 23:13] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.504734] kauditd_printk_skb: 3 callbacks suppressed
	[Dec12 23:15] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [d4520c46e577e942917f1550bda35379db52d7931ac526b27dac9908a477d97f] <==
	* raft2023/12/12 23:11:24 INFO: 821abe7be15f44a3 became follower at term 1
	raft2023/12/12 23:11:24 INFO: 821abe7be15f44a3 switched to configuration voters=(9375015013596480675)
	2023-12-12 23:11:24.227289 W | auth: simple token is not cryptographically signed
	2023-12-12 23:11:24.233859 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-12 23:11:24.236609 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-12 23:11:24.236856 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-12 23:11:24.237119 I | etcdserver: 821abe7be15f44a3 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-12-12 23:11:24.237467 I | embed: listening for peers on 192.168.39.68:2380
	raft2023/12/12 23:11:24 INFO: 821abe7be15f44a3 switched to configuration voters=(9375015013596480675)
	2023-12-12 23:11:24.237780 I | etcdserver/membership: added member 821abe7be15f44a3 [https://192.168.39.68:2380] to cluster 68cd46418ae274f9
	raft2023/12/12 23:11:24 INFO: 821abe7be15f44a3 is starting a new election at term 1
	raft2023/12/12 23:11:24 INFO: 821abe7be15f44a3 became candidate at term 2
	raft2023/12/12 23:11:24 INFO: 821abe7be15f44a3 received MsgVoteResp from 821abe7be15f44a3 at term 2
	raft2023/12/12 23:11:24 INFO: 821abe7be15f44a3 became leader at term 2
	raft2023/12/12 23:11:24 INFO: raft.node: 821abe7be15f44a3 elected leader 821abe7be15f44a3 at term 2
	2023-12-12 23:11:24.420384 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-12 23:11:24.422056 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-12 23:11:24.422190 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-12 23:11:24.422311 I | etcdserver: published {Name:ingress-addon-legacy-401709 ClientURLs:[https://192.168.39.68:2379]} to cluster 68cd46418ae274f9
	2023-12-12 23:11:24.422734 I | embed: ready to serve client requests
	2023-12-12 23:11:24.426394 I | embed: serving client requests on 192.168.39.68:2379
	2023-12-12 23:11:24.426630 I | embed: ready to serve client requests
	2023-12-12 23:11:24.438401 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-12 23:11:46.921237 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" " with result "range_response_count:0 size:5" took too long (116.536207ms) to execute
	2023-12-12 23:11:46.922062 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-dx8qr\" " with result "range_response_count:1 size:4277" took too long (111.701452ms) to execute
	
	* 
	* ==> kernel <==
	*  23:15:35 up 4 min,  0 users,  load average: 0.82, 0.35, 0.14
	Linux ingress-addon-legacy-401709 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [d0c62d5deb8245e9d922e9527ba7326643209d0fbd3560d2cd1364ebce79e2e5] <==
	* I1212 23:11:27.866120       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1212 23:11:27.916651       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1212 23:11:27.916734       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 23:11:27.920419       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 23:11:27.920480       1 cache.go:39] Caches are synced for autoregister controller
	I1212 23:11:27.947731       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1212 23:11:28.814717       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1212 23:11:28.814789       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1212 23:11:28.827401       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1212 23:11:28.834681       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1212 23:11:28.834735       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1212 23:11:29.342062       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 23:11:29.396170       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1212 23:11:29.560280       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.68]
	I1212 23:11:29.561221       1 controller.go:609] quota admission added evaluator for: endpoints
	I1212 23:11:29.565710       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 23:11:30.169408       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	E1212 23:11:31.032718       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
	I1212 23:11:31.156317       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1212 23:11:31.231577       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1212 23:11:31.523589       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 23:11:45.866844       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1212 23:11:45.872065       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1212 23:12:27.992052       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1212 23:13:01.459062       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [ed8f6ab8047e991772e5e69594295c1a2aa24080909c6acdc36312ec66f4e04f] <==
	* I1212 23:11:45.940357       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"76999146-f553-40bf-9548-a49d8d96c7b5", APIVersion:"apps/v1", ResourceVersion:"312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-dx8qr
	I1212 23:11:45.963565       1 shared_informer.go:230] Caches are synced for stateful set 
	I1212 23:11:45.975443       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"76999146-f553-40bf-9548-a49d8d96c7b5", APIVersion:"apps/v1", ResourceVersion:"312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-v6xbl
	I1212 23:11:45.998883       1 shared_informer.go:230] Caches are synced for endpoint 
	I1212 23:11:46.021053       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1212 23:11:46.117504       1 shared_informer.go:230] Caches are synced for attach detach 
	I1212 23:11:46.169430       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I1212 23:11:46.279843       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I1212 23:11:46.300920       1 shared_informer.go:230] Caches are synced for disruption 
	I1212 23:11:46.301105       1 disruption.go:339] Sending events to api server.
	I1212 23:11:46.353252       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1212 23:11:46.353334       1 shared_informer.go:230] Caches are synced for HPA 
	I1212 23:11:46.359671       1 shared_informer.go:230] Caches are synced for resource quota 
	I1212 23:11:46.390161       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1212 23:11:46.390240       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1212 23:11:46.398476       1 shared_informer.go:230] Caches are synced for resource quota 
	I1212 23:12:27.984148       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"bc08ccec-c828-4dee-96c6-da220bb8255e", APIVersion:"apps/v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1212 23:12:28.013357       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"32e5c4d5-a06e-434f-baf2-856911eff036", APIVersion:"apps/v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-cjzzp
	I1212 23:12:28.063664       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"f7ec9db4-8f8f-48d9-829e-2771a2c6ec46", APIVersion:"batch/v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-czgn8
	I1212 23:12:28.063739       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"9aa84b6e-8fb9-4342-bebd-08a612085b8a", APIVersion:"batch/v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-6wn9g
	I1212 23:12:33.921722       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"f7ec9db4-8f8f-48d9-829e-2771a2c6ec46", APIVersion:"batch/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1212 23:12:35.927900       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"9aa84b6e-8fb9-4342-bebd-08a612085b8a", APIVersion:"batch/v1", ResourceVersion:"485", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1212 23:15:23.144406       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"b3a3f6e8-2308-44e2-afd8-6cd750a996d8", APIVersion:"apps/v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1212 23:15:23.172185       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"38959a53-2eee-49f3-805a-cfc9843ded3d", APIVersion:"apps/v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-5mpp7
	E1212 23:15:32.307854       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-269g2" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [be18e38e9c74256797978e3fdb76175fbf3d7bafc4263a15d23926dc98c26cdc] <==
	* W1212 23:11:48.714824       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1212 23:11:48.723610       1 node.go:136] Successfully retrieved node IP: 192.168.39.68
	I1212 23:11:48.723661       1 server_others.go:186] Using iptables Proxier.
	I1212 23:11:48.724219       1 server.go:583] Version: v1.18.20
	I1212 23:11:48.726895       1 config.go:315] Starting service config controller
	I1212 23:11:48.727092       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1212 23:11:48.728341       1 config.go:133] Starting endpoints config controller
	I1212 23:11:48.728377       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1212 23:11:48.830064       1 shared_informer.go:230] Caches are synced for service config 
	I1212 23:11:48.830085       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [683c50f3c552ed90f3b8e7541338c95c5780471206b87f45c7cc1aca45a41049] <==
	* I1212 23:11:27.923403       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1212 23:11:27.923491       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1212 23:11:27.932161       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1212 23:11:27.933108       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 23:11:27.933119       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 23:11:27.933162       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1212 23:11:27.935752       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 23:11:27.937350       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 23:11:27.937807       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 23:11:27.937901       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 23:11:27.938089       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 23:11:27.939065       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 23:11:27.939159       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 23:11:27.939210       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 23:11:27.939411       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 23:11:27.939577       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:11:27.940477       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 23:11:27.940480       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 23:11:28.811782       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 23:11:28.921221       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 23:11:28.981291       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 23:11:29.095170       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 23:11:29.116686       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 23:11:29.163636       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1212 23:11:29.433251       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:10:55 UTC, ends at Tue 2023-12-12 23:15:35 UTC. --
	Dec 12 23:12:45 ingress-addon-legacy-401709 kubelet[1437]: I1212 23:12:45.321599    1437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-js4dj" (UniqueName: "kubernetes.io/secret/4a5c7d08-63e4-4f0a-8753-dea9c6e83b10-minikube-ingress-dns-token-js4dj") pod "kube-ingress-dns-minikube" (UID: "4a5c7d08-63e4-4f0a-8753-dea9c6e83b10")
	Dec 12 23:13:01 ingress-addon-legacy-401709 kubelet[1437]: I1212 23:13:01.640779    1437 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 12 23:13:01 ingress-addon-legacy-401709 kubelet[1437]: I1212 23:13:01.778696    1437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-slp2l" (UniqueName: "kubernetes.io/secret/84cf89ad-bc4e-4b24-a63d-6e00b0c634f7-default-token-slp2l") pod "nginx" (UID: "84cf89ad-bc4e-4b24-a63d-6e00b0c634f7")
	Dec 12 23:15:23 ingress-addon-legacy-401709 kubelet[1437]: I1212 23:15:23.213385    1437 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 12 23:15:23 ingress-addon-legacy-401709 kubelet[1437]: I1212 23:15:23.363534    1437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-slp2l" (UniqueName: "kubernetes.io/secret/1587c6a0-b9aa-4767-bea2-ad25996b57ed-default-token-slp2l") pod "hello-world-app-5f5d8b66bb-5mpp7" (UID: "1587c6a0-b9aa-4767-bea2-ad25996b57ed")
	Dec 12 23:15:25 ingress-addon-legacy-401709 kubelet[1437]: I1212 23:15:25.167177    1437 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5b6db888a124baa4373c4f8edadfca02f910134ebbea9fcf5904d32a69b4f60d
	Dec 12 23:15:25 ingress-addon-legacy-401709 kubelet[1437]: I1212 23:15:25.219246    1437 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5b6db888a124baa4373c4f8edadfca02f910134ebbea9fcf5904d32a69b4f60d
	Dec 12 23:15:25 ingress-addon-legacy-401709 kubelet[1437]: E1212 23:15:25.220136    1437 remote_runtime.go:295] ContainerStatus "5b6db888a124baa4373c4f8edadfca02f910134ebbea9fcf5904d32a69b4f60d" from runtime service failed: rpc error: code = NotFound desc = could not find container "5b6db888a124baa4373c4f8edadfca02f910134ebbea9fcf5904d32a69b4f60d": container with ID starting with 5b6db888a124baa4373c4f8edadfca02f910134ebbea9fcf5904d32a69b4f60d not found: ID does not exist
	Dec 12 23:15:25 ingress-addon-legacy-401709 kubelet[1437]: I1212 23:15:25.273134    1437 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-js4dj" (UniqueName: "kubernetes.io/secret/4a5c7d08-63e4-4f0a-8753-dea9c6e83b10-minikube-ingress-dns-token-js4dj") pod "4a5c7d08-63e4-4f0a-8753-dea9c6e83b10" (UID: "4a5c7d08-63e4-4f0a-8753-dea9c6e83b10")
	Dec 12 23:15:25 ingress-addon-legacy-401709 kubelet[1437]: I1212 23:15:25.287105    1437 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a5c7d08-63e4-4f0a-8753-dea9c6e83b10-minikube-ingress-dns-token-js4dj" (OuterVolumeSpecName: "minikube-ingress-dns-token-js4dj") pod "4a5c7d08-63e4-4f0a-8753-dea9c6e83b10" (UID: "4a5c7d08-63e4-4f0a-8753-dea9c6e83b10"). InnerVolumeSpecName "minikube-ingress-dns-token-js4dj". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 23:15:25 ingress-addon-legacy-401709 kubelet[1437]: I1212 23:15:25.373517    1437 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-js4dj" (UniqueName: "kubernetes.io/secret/4a5c7d08-63e4-4f0a-8753-dea9c6e83b10-minikube-ingress-dns-token-js4dj") on node "ingress-addon-legacy-401709" DevicePath ""
	Dec 12 23:15:25 ingress-addon-legacy-401709 kubelet[1437]: E1212 23:15:25.675470    1437 kubelet_pods.go:1235] Failed killing the pod "kube-ingress-dns-minikube": failed to "KillContainer" for "minikube-ingress-dns" with KillContainerError: "rpc error: code = NotFound desc = could not find container \"5b6db888a124baa4373c4f8edadfca02f910134ebbea9fcf5904d32a69b4f60d\": container with ID starting with 5b6db888a124baa4373c4f8edadfca02f910134ebbea9fcf5904d32a69b4f60d not found: ID does not exist"
	Dec 12 23:15:27 ingress-addon-legacy-401709 kubelet[1437]: E1212 23:15:27.640157    1437 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-cjzzp.17a038a203479944", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-cjzzp", UID:"eb0c3be9-46a7-4fb9-8ec8-d181acf34cc8", APIVersion:"v1", ResourceVersion:"468", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-401709"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1565843e5f6c344, ext:236576418249, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1565843e5f6c344, ext:236576418249, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-cjzzp.17a038a203479944" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 12 23:15:27 ingress-addon-legacy-401709 kubelet[1437]: E1212 23:15:27.709082    1437 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-cjzzp.17a038a203479944", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-cjzzp", UID:"eb0c3be9-46a7-4fb9-8ec8-d181acf34cc8", APIVersion:"v1", ResourceVersion:"468", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-401709"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1565843e5f6c344, ext:236576418249, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1565843e965a059, ext:236634015453, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-cjzzp.17a038a203479944" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 12 23:15:30 ingress-addon-legacy-401709 kubelet[1437]: W1212 23:15:30.195917    1437 pod_container_deletor.go:77] Container "7a37bdba9b17d14e2d81af437c70805c20fc9f69101a0d7c3b5c8faf2d3dd800" not found in pod's containers
	Dec 12 23:15:31 ingress-addon-legacy-401709 kubelet[1437]: I1212 23:15:31.528336    1437 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c1e25f2d7abc47137bd9df5e72b6f2cfcfccd57726bf1ba5f641ebcd25a79959
	Dec 12 23:15:31 ingress-addon-legacy-401709 kubelet[1437]: I1212 23:15:31.552035    1437 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: e6e65ea63ae41dfed2b3094d7067db7d5e2c73cc4dd6aa3b685509b771ef9536
	Dec 12 23:15:31 ingress-addon-legacy-401709 kubelet[1437]: I1212 23:15:31.576465    1437 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 6612cebf7e16957f33b5216023fa5812285f6d21544ea5edb52ef2b01513b576
	Dec 12 23:15:31 ingress-addon-legacy-401709 kubelet[1437]: I1212 23:15:31.797175    1437 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/eb0c3be9-46a7-4fb9-8ec8-d181acf34cc8-webhook-cert") pod "eb0c3be9-46a7-4fb9-8ec8-d181acf34cc8" (UID: "eb0c3be9-46a7-4fb9-8ec8-d181acf34cc8")
	Dec 12 23:15:31 ingress-addon-legacy-401709 kubelet[1437]: I1212 23:15:31.797266    1437 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-56snq" (UniqueName: "kubernetes.io/secret/eb0c3be9-46a7-4fb9-8ec8-d181acf34cc8-ingress-nginx-token-56snq") pod "eb0c3be9-46a7-4fb9-8ec8-d181acf34cc8" (UID: "eb0c3be9-46a7-4fb9-8ec8-d181acf34cc8")
	Dec 12 23:15:31 ingress-addon-legacy-401709 kubelet[1437]: I1212 23:15:31.802334    1437 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0c3be9-46a7-4fb9-8ec8-d181acf34cc8-ingress-nginx-token-56snq" (OuterVolumeSpecName: "ingress-nginx-token-56snq") pod "eb0c3be9-46a7-4fb9-8ec8-d181acf34cc8" (UID: "eb0c3be9-46a7-4fb9-8ec8-d181acf34cc8"). InnerVolumeSpecName "ingress-nginx-token-56snq". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 23:15:31 ingress-addon-legacy-401709 kubelet[1437]: I1212 23:15:31.802603    1437 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0c3be9-46a7-4fb9-8ec8-d181acf34cc8-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "eb0c3be9-46a7-4fb9-8ec8-d181acf34cc8" (UID: "eb0c3be9-46a7-4fb9-8ec8-d181acf34cc8"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 23:15:31 ingress-addon-legacy-401709 kubelet[1437]: I1212 23:15:31.897614    1437 reconciler.go:319] Volume detached for volume "ingress-nginx-token-56snq" (UniqueName: "kubernetes.io/secret/eb0c3be9-46a7-4fb9-8ec8-d181acf34cc8-ingress-nginx-token-56snq") on node "ingress-addon-legacy-401709" DevicePath ""
	Dec 12 23:15:31 ingress-addon-legacy-401709 kubelet[1437]: I1212 23:15:31.897643    1437 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/eb0c3be9-46a7-4fb9-8ec8-d181acf34cc8-webhook-cert") on node "ingress-addon-legacy-401709" DevicePath ""
	Dec 12 23:15:33 ingress-addon-legacy-401709 kubelet[1437]: W1212 23:15:33.680741    1437 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/eb0c3be9-46a7-4fb9-8ec8-d181acf34cc8/volumes" does not exist
	
	* 
	* ==> storage-provisioner [c1177cd5f98acd506c121e1857985699cadbe9429b01500ca5839633fb7937c3] <==
	* I1212 23:12:17.978822       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 23:12:17.987723       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 23:12:17.987782       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 23:12:18.005406       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 23:12:18.005918       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b60f28f4-a774-4af7-9ca0-faf0e355b34b", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-401709_679dc499-223c-46c7-bb17-4930ffa50190 became leader
	I1212 23:12:18.006062       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-401709_679dc499-223c-46c7-bb17-4930ffa50190!
	I1212 23:12:18.107136       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-401709_679dc499-223c-46c7-bb17-4930ffa50190!
	
	* 
	* ==> storage-provisioner [eedd9d442bdd13ee69c58a2b67952c799db7c92b206b24975e8c92b192b9d6c4] <==
	* I1212 23:11:47.773783       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 23:12:17.776226       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-401709 -n ingress-addon-legacy-401709
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-401709 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (170.88s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-510563 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-510563 -- exec busybox-5bc68d56bd-4vnmj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-510563 -- exec busybox-5bc68d56bd-4vnmj -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-510563 -- exec busybox-5bc68d56bd-4vnmj -- sh -c "ping -c 1 192.168.39.1": exit status 1 (204.050389ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-4vnmj): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-510563 -- exec busybox-5bc68d56bd-6hjc6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-510563 -- exec busybox-5bc68d56bd-6hjc6 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-510563 -- exec busybox-5bc68d56bd-6hjc6 -- sh -c "ping -c 1 192.168.39.1": exit status 1 (196.95945ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-6hjc6): exit status 1
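Note: the "ping: permission denied (are you root?)" error above typically indicates that the busybox container is running as a non-root user without the CAP_NET_RAW capability, which BusyBox ping needs to open a raw ICMP socket. A minimal sketch of how this could be checked by hand against the same pods (the profile and pod names are taken from the log above; whether the pods are still running, and whether capabilities are the actual cause here, are assumptions, not part of the test):

	# inspect the effective user and capabilities inside the pod
	out/minikube-linux-amd64 kubectl -p multinode-510563 -- exec busybox-5bc68d56bd-4vnmj -- id
	out/minikube-linux-amd64 kubectl -p multinode-510563 -- exec busybox-5bc68d56bd-4vnmj -- sh -c "grep CapEff /proc/self/status"

	# granting NET_RAW in the pod spec would look roughly like this (hypothetical manifest fragment):
	#   securityContext:
	#     capabilities:
	#       add: ["NET_RAW"]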
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-510563 -n multinode-510563
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-510563 logs -n 25: (1.411005688s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-275829 ssh -- ls                    | mount-start-2-275829 | jenkins | v1.32.0 | 12 Dec 23 23:19 UTC | 12 Dec 23 23:19 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-275829 ssh --                       | mount-start-2-275829 | jenkins | v1.32.0 | 12 Dec 23 23:19 UTC | 12 Dec 23 23:19 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-275829                           | mount-start-2-275829 | jenkins | v1.32.0 | 12 Dec 23 23:19 UTC | 12 Dec 23 23:19 UTC |
	| start   | -p mount-start-2-275829                           | mount-start-2-275829 | jenkins | v1.32.0 | 12 Dec 23 23:19 UTC | 12 Dec 23 23:19 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-275829 | jenkins | v1.32.0 | 12 Dec 23 23:19 UTC |                     |
	|         | --profile mount-start-2-275829                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-275829 ssh -- ls                    | mount-start-2-275829 | jenkins | v1.32.0 | 12 Dec 23 23:19 UTC | 12 Dec 23 23:19 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-275829 ssh --                       | mount-start-2-275829 | jenkins | v1.32.0 | 12 Dec 23 23:19 UTC | 12 Dec 23 23:19 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-275829                           | mount-start-2-275829 | jenkins | v1.32.0 | 12 Dec 23 23:19 UTC | 12 Dec 23 23:19 UTC |
	| delete  | -p mount-start-1-260411                           | mount-start-1-260411 | jenkins | v1.32.0 | 12 Dec 23 23:19 UTC | 12 Dec 23 23:19 UTC |
	| start   | -p multinode-510563                               | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:19 UTC | 12 Dec 23 23:21 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-510563 -- apply -f                   | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:21 UTC | 12 Dec 23 23:21 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-510563 -- rollout                    | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:21 UTC | 12 Dec 23 23:21 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-510563 -- get pods -o                | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:21 UTC | 12 Dec 23 23:21 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-510563 -- get pods -o                | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:21 UTC | 12 Dec 23 23:21 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-510563 -- exec                       | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:21 UTC | 12 Dec 23 23:21 UTC |
	|         | busybox-5bc68d56bd-4vnmj --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-510563 -- exec                       | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:21 UTC | 12 Dec 23 23:21 UTC |
	|         | busybox-5bc68d56bd-6hjc6 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-510563 -- exec                       | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:21 UTC | 12 Dec 23 23:21 UTC |
	|         | busybox-5bc68d56bd-4vnmj --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-510563 -- exec                       | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:21 UTC | 12 Dec 23 23:21 UTC |
	|         | busybox-5bc68d56bd-6hjc6 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-510563 -- exec                       | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:21 UTC | 12 Dec 23 23:21 UTC |
	|         | busybox-5bc68d56bd-4vnmj -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-510563 -- exec                       | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:21 UTC | 12 Dec 23 23:21 UTC |
	|         | busybox-5bc68d56bd-6hjc6 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-510563 -- get pods -o                | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:21 UTC | 12 Dec 23 23:21 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-510563 -- exec                       | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:21 UTC | 12 Dec 23 23:21 UTC |
	|         | busybox-5bc68d56bd-4vnmj                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-510563 -- exec                       | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:21 UTC |                     |
	|         | busybox-5bc68d56bd-4vnmj -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-510563 -- exec                       | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:21 UTC | 12 Dec 23 23:21 UTC |
	|         | busybox-5bc68d56bd-6hjc6                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-510563 -- exec                       | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:21 UTC |                     |
	|         | busybox-5bc68d56bd-6hjc6 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 23:19:49
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 23:19:49.611032  156765 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:19:49.611277  156765 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:19:49.611286  156765 out.go:309] Setting ErrFile to fd 2...
	I1212 23:19:49.611290  156765 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:19:49.611480  156765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1212 23:19:49.612041  156765 out.go:303] Setting JSON to false
	I1212 23:19:49.612979  156765 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7338,"bootTime":1702415852,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 23:19:49.613040  156765 start.go:138] virtualization: kvm guest
	I1212 23:19:49.615418  156765 out.go:177] * [multinode-510563] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 23:19:49.616985  156765 out.go:177]   - MINIKUBE_LOCATION=17777
	I1212 23:19:49.616982  156765 notify.go:220] Checking for updates...
	I1212 23:19:49.618525  156765 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:19:49.619828  156765 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:19:49.621340  156765 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 23:19:49.622753  156765 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 23:19:49.624078  156765 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:19:49.625655  156765 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:19:49.661230  156765 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 23:19:49.662705  156765 start.go:298] selected driver: kvm2
	I1212 23:19:49.662726  156765 start.go:902] validating driver "kvm2" against <nil>
	I1212 23:19:49.662742  156765 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:19:49.663511  156765 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:19:49.663601  156765 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17777-136241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 23:19:49.678512  156765 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 23:19:49.678588  156765 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 23:19:49.678814  156765 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 23:19:49.678864  156765 cni.go:84] Creating CNI manager for ""
	I1212 23:19:49.678871  156765 cni.go:136] 0 nodes found, recommending kindnet
	I1212 23:19:49.678897  156765 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 23:19:49.678913  156765 start_flags.go:323] config:
	{Name:multinode-510563 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-510563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:19:49.679034  156765 iso.go:125] acquiring lock: {Name:mkaf0c5717f5bf6253bd7ebf86eb20a82d195bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:19:49.680804  156765 out.go:177] * Starting control plane node multinode-510563 in cluster multinode-510563
	I1212 23:19:49.682266  156765 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:19:49.682313  156765 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 23:19:49.682325  156765 cache.go:56] Caching tarball of preloaded images
	I1212 23:19:49.682419  156765 preload.go:174] Found /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 23:19:49.682430  156765 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 23:19:49.682727  156765 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/config.json ...
	I1212 23:19:49.682748  156765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/config.json: {Name:mk1a3c6b8196c8c0c110dae92dc955ea27e0e53e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:19:49.682869  156765 start.go:365] acquiring machines lock for multinode-510563: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:19:49.682910  156765 start.go:369] acquired machines lock for "multinode-510563" in 15.295µs
	I1212 23:19:49.682926  156765 start.go:93] Provisioning new machine with config: &{Name:multinode-510563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-510563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:19:49.682997  156765 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 23:19:49.684743  156765 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 23:19:49.684901  156765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:19:49.684941  156765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:19:49.699174  156765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40961
	I1212 23:19:49.699610  156765 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:19:49.700202  156765 main.go:141] libmachine: Using API Version  1
	I1212 23:19:49.700229  156765 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:19:49.700572  156765 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:19:49.700810  156765 main.go:141] libmachine: (multinode-510563) Calling .GetMachineName
	I1212 23:19:49.701038  156765 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:19:49.701300  156765 start.go:159] libmachine.API.Create for "multinode-510563" (driver="kvm2")
	I1212 23:19:49.701341  156765 client.go:168] LocalClient.Create starting
	I1212 23:19:49.701375  156765 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem
	I1212 23:19:49.701419  156765 main.go:141] libmachine: Decoding PEM data...
	I1212 23:19:49.701453  156765 main.go:141] libmachine: Parsing certificate...
	I1212 23:19:49.701515  156765 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem
	I1212 23:19:49.701542  156765 main.go:141] libmachine: Decoding PEM data...
	I1212 23:19:49.701560  156765 main.go:141] libmachine: Parsing certificate...
	I1212 23:19:49.701584  156765 main.go:141] libmachine: Running pre-create checks...
	I1212 23:19:49.701595  156765 main.go:141] libmachine: (multinode-510563) Calling .PreCreateCheck
	I1212 23:19:49.701953  156765 main.go:141] libmachine: (multinode-510563) Calling .GetConfigRaw
	I1212 23:19:49.702345  156765 main.go:141] libmachine: Creating machine...
	I1212 23:19:49.702360  156765 main.go:141] libmachine: (multinode-510563) Calling .Create
	I1212 23:19:49.702487  156765 main.go:141] libmachine: (multinode-510563) Creating KVM machine...
	I1212 23:19:49.703811  156765 main.go:141] libmachine: (multinode-510563) DBG | found existing default KVM network
	I1212 23:19:49.704696  156765 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:19:49.704520  156788 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000147900}
	I1212 23:19:49.710175  156765 main.go:141] libmachine: (multinode-510563) DBG | trying to create private KVM network mk-multinode-510563 192.168.39.0/24...
	I1212 23:19:49.785873  156765 main.go:141] libmachine: (multinode-510563) DBG | private KVM network mk-multinode-510563 192.168.39.0/24 created
	I1212 23:19:49.785906  156765 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:19:49.785836  156788 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 23:19:49.785922  156765 main.go:141] libmachine: (multinode-510563) Setting up store path in /home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563 ...
	I1212 23:19:49.785953  156765 main.go:141] libmachine: (multinode-510563) Building disk image from file:///home/jenkins/minikube-integration/17777-136241/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso
	I1212 23:19:49.785970  156765 main.go:141] libmachine: (multinode-510563) Downloading /home/jenkins/minikube-integration/17777-136241/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17777-136241/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso...
	I1212 23:19:49.999303  156765 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:19:49.999183  156788 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa...
	I1212 23:19:50.354456  156765 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:19:50.354279  156788 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/multinode-510563.rawdisk...
	I1212 23:19:50.354494  156765 main.go:141] libmachine: (multinode-510563) DBG | Writing magic tar header
	I1212 23:19:50.354516  156765 main.go:141] libmachine: (multinode-510563) DBG | Writing SSH key tar header
	I1212 23:19:50.354530  156765 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:19:50.354453  156788 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563 ...
	I1212 23:19:50.354629  156765 main.go:141] libmachine: (multinode-510563) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563
	I1212 23:19:50.354664  156765 main.go:141] libmachine: (multinode-510563) Setting executable bit set on /home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563 (perms=drwx------)
	I1212 23:19:50.354678  156765 main.go:141] libmachine: (multinode-510563) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17777-136241/.minikube/machines
	I1212 23:19:50.354703  156765 main.go:141] libmachine: (multinode-510563) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 23:19:50.354718  156765 main.go:141] libmachine: (multinode-510563) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17777-136241
	I1212 23:19:50.354732  156765 main.go:141] libmachine: (multinode-510563) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 23:19:50.354739  156765 main.go:141] libmachine: (multinode-510563) DBG | Checking permissions on dir: /home/jenkins
	I1212 23:19:50.354747  156765 main.go:141] libmachine: (multinode-510563) Setting executable bit set on /home/jenkins/minikube-integration/17777-136241/.minikube/machines (perms=drwxr-xr-x)
	I1212 23:19:50.354757  156765 main.go:141] libmachine: (multinode-510563) DBG | Checking permissions on dir: /home
	I1212 23:19:50.354770  156765 main.go:141] libmachine: (multinode-510563) Setting executable bit set on /home/jenkins/minikube-integration/17777-136241/.minikube (perms=drwxr-xr-x)
	I1212 23:19:50.354785  156765 main.go:141] libmachine: (multinode-510563) Setting executable bit set on /home/jenkins/minikube-integration/17777-136241 (perms=drwxrwxr-x)
	I1212 23:19:50.354800  156765 main.go:141] libmachine: (multinode-510563) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 23:19:50.354812  156765 main.go:141] libmachine: (multinode-510563) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 23:19:50.354824  156765 main.go:141] libmachine: (multinode-510563) Creating domain...
	I1212 23:19:50.354831  156765 main.go:141] libmachine: (multinode-510563) DBG | Skipping /home - not owner
	I1212 23:19:50.355790  156765 main.go:141] libmachine: (multinode-510563) define libvirt domain using xml: 
	I1212 23:19:50.355822  156765 main.go:141] libmachine: (multinode-510563) <domain type='kvm'>
	I1212 23:19:50.355833  156765 main.go:141] libmachine: (multinode-510563)   <name>multinode-510563</name>
	I1212 23:19:50.355847  156765 main.go:141] libmachine: (multinode-510563)   <memory unit='MiB'>2200</memory>
	I1212 23:19:50.355858  156765 main.go:141] libmachine: (multinode-510563)   <vcpu>2</vcpu>
	I1212 23:19:50.355869  156765 main.go:141] libmachine: (multinode-510563)   <features>
	I1212 23:19:50.355883  156765 main.go:141] libmachine: (multinode-510563)     <acpi/>
	I1212 23:19:50.355895  156765 main.go:141] libmachine: (multinode-510563)     <apic/>
	I1212 23:19:50.355913  156765 main.go:141] libmachine: (multinode-510563)     <pae/>
	I1212 23:19:50.355925  156765 main.go:141] libmachine: (multinode-510563)     
	I1212 23:19:50.355961  156765 main.go:141] libmachine: (multinode-510563)   </features>
	I1212 23:19:50.355981  156765 main.go:141] libmachine: (multinode-510563)   <cpu mode='host-passthrough'>
	I1212 23:19:50.355992  156765 main.go:141] libmachine: (multinode-510563)   
	I1212 23:19:50.356003  156765 main.go:141] libmachine: (multinode-510563)   </cpu>
	I1212 23:19:50.356021  156765 main.go:141] libmachine: (multinode-510563)   <os>
	I1212 23:19:50.356053  156765 main.go:141] libmachine: (multinode-510563)     <type>hvm</type>
	I1212 23:19:50.356067  156765 main.go:141] libmachine: (multinode-510563)     <boot dev='cdrom'/>
	I1212 23:19:50.356078  156765 main.go:141] libmachine: (multinode-510563)     <boot dev='hd'/>
	I1212 23:19:50.356091  156765 main.go:141] libmachine: (multinode-510563)     <bootmenu enable='no'/>
	I1212 23:19:50.356103  156765 main.go:141] libmachine: (multinode-510563)   </os>
	I1212 23:19:50.356121  156765 main.go:141] libmachine: (multinode-510563)   <devices>
	I1212 23:19:50.356132  156765 main.go:141] libmachine: (multinode-510563)     <disk type='file' device='cdrom'>
	I1212 23:19:50.356150  156765 main.go:141] libmachine: (multinode-510563)       <source file='/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/boot2docker.iso'/>
	I1212 23:19:50.356163  156765 main.go:141] libmachine: (multinode-510563)       <target dev='hdc' bus='scsi'/>
	I1212 23:19:50.356176  156765 main.go:141] libmachine: (multinode-510563)       <readonly/>
	I1212 23:19:50.356192  156765 main.go:141] libmachine: (multinode-510563)     </disk>
	I1212 23:19:50.356206  156765 main.go:141] libmachine: (multinode-510563)     <disk type='file' device='disk'>
	I1212 23:19:50.356216  156765 main.go:141] libmachine: (multinode-510563)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 23:19:50.356233  156765 main.go:141] libmachine: (multinode-510563)       <source file='/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/multinode-510563.rawdisk'/>
	I1212 23:19:50.356246  156765 main.go:141] libmachine: (multinode-510563)       <target dev='hda' bus='virtio'/>
	I1212 23:19:50.356258  156765 main.go:141] libmachine: (multinode-510563)     </disk>
	I1212 23:19:50.356274  156765 main.go:141] libmachine: (multinode-510563)     <interface type='network'>
	I1212 23:19:50.356288  156765 main.go:141] libmachine: (multinode-510563)       <source network='mk-multinode-510563'/>
	I1212 23:19:50.356300  156765 main.go:141] libmachine: (multinode-510563)       <model type='virtio'/>
	I1212 23:19:50.356311  156765 main.go:141] libmachine: (multinode-510563)     </interface>
	I1212 23:19:50.356323  156765 main.go:141] libmachine: (multinode-510563)     <interface type='network'>
	I1212 23:19:50.356337  156765 main.go:141] libmachine: (multinode-510563)       <source network='default'/>
	I1212 23:19:50.356353  156765 main.go:141] libmachine: (multinode-510563)       <model type='virtio'/>
	I1212 23:19:50.356369  156765 main.go:141] libmachine: (multinode-510563)     </interface>
	I1212 23:19:50.356377  156765 main.go:141] libmachine: (multinode-510563)     <serial type='pty'>
	I1212 23:19:50.356391  156765 main.go:141] libmachine: (multinode-510563)       <target port='0'/>
	I1212 23:19:50.356398  156765 main.go:141] libmachine: (multinode-510563)     </serial>
	I1212 23:19:50.356410  156765 main.go:141] libmachine: (multinode-510563)     <console type='pty'>
	I1212 23:19:50.356427  156765 main.go:141] libmachine: (multinode-510563)       <target type='serial' port='0'/>
	I1212 23:19:50.356473  156765 main.go:141] libmachine: (multinode-510563)     </console>
	I1212 23:19:50.356494  156765 main.go:141] libmachine: (multinode-510563)     <rng model='virtio'>
	I1212 23:19:50.356512  156765 main.go:141] libmachine: (multinode-510563)       <backend model='random'>/dev/random</backend>
	I1212 23:19:50.356525  156765 main.go:141] libmachine: (multinode-510563)     </rng>
	I1212 23:19:50.356539  156765 main.go:141] libmachine: (multinode-510563)     
	I1212 23:19:50.356561  156765 main.go:141] libmachine: (multinode-510563)     
	I1212 23:19:50.356576  156765 main.go:141] libmachine: (multinode-510563)   </devices>
	I1212 23:19:50.356588  156765 main.go:141] libmachine: (multinode-510563) </domain>
	I1212 23:19:50.356611  156765 main.go:141] libmachine: (multinode-510563) 
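The XML above is the libvirt domain definition the kvm2 driver generates for the VM: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a CD-ROM, the raw disk, and two virtio NICs on the default and mk-multinode-510563 networks. As an illustrative aside (not the driver's actual code, which talks to libvirt through its API), a minimal Go sketch that registers and boots a domain from such an XML file by shelling out to virsh could look like this; the XML path is hypothetical:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Hypothetical path to a domain XML like the one logged above.
	xmlPath := "/tmp/multinode-510563.xml"

	// "virsh define" registers the domain with libvirtd, "virsh start" boots it.
	steps := [][]string{
		{"--connect", "qemu:///system", "define", xmlPath},
		{"--connect", "qemu:///system", "start", "multinode-510563"},
	}
	for _, args := range steps {
		cmd := exec.Command("virsh", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "virsh %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}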
	I1212 23:19:50.361026  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:f0:d0:86 in network default
	I1212 23:19:50.361642  156765 main.go:141] libmachine: (multinode-510563) Ensuring networks are active...
	I1212 23:19:50.361665  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:19:50.362431  156765 main.go:141] libmachine: (multinode-510563) Ensuring network default is active
	I1212 23:19:50.362725  156765 main.go:141] libmachine: (multinode-510563) Ensuring network mk-multinode-510563 is active
	I1212 23:19:50.363302  156765 main.go:141] libmachine: (multinode-510563) Getting domain xml...
	I1212 23:19:50.364076  156765 main.go:141] libmachine: (multinode-510563) Creating domain...
	I1212 23:19:51.589429  156765 main.go:141] libmachine: (multinode-510563) Waiting to get IP...
	I1212 23:19:51.590249  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:19:51.590693  156765 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:19:51.590717  156765 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:19:51.590632  156788 retry.go:31] will retry after 220.041726ms: waiting for machine to come up
	I1212 23:19:51.811972  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:19:51.812343  156765 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:19:51.812386  156765 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:19:51.812298  156788 retry.go:31] will retry after 311.765342ms: waiting for machine to come up
	I1212 23:19:52.125739  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:19:52.126173  156765 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:19:52.126205  156765 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:19:52.126121  156788 retry.go:31] will retry after 297.016169ms: waiting for machine to come up
	I1212 23:19:52.424502  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:19:52.424925  156765 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:19:52.424959  156765 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:19:52.424863  156788 retry.go:31] will retry after 432.401736ms: waiting for machine to come up
	I1212 23:19:52.858531  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:19:52.858974  156765 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:19:52.858998  156765 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:19:52.858933  156788 retry.go:31] will retry after 706.411363ms: waiting for machine to come up
	I1212 23:19:53.566871  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:19:53.567268  156765 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:19:53.567298  156765 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:19:53.567220  156788 retry.go:31] will retry after 734.204037ms: waiting for machine to come up
	I1212 23:19:54.303123  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:19:54.303534  156765 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:19:54.303561  156765 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:19:54.303481  156788 retry.go:31] will retry after 810.073414ms: waiting for machine to come up
	I1212 23:19:55.115040  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:19:55.115508  156765 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:19:55.115531  156765 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:19:55.115458  156788 retry.go:31] will retry after 1.438156321s: waiting for machine to come up
	I1212 23:19:56.554979  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:19:56.555419  156765 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:19:56.555442  156765 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:19:56.555383  156788 retry.go:31] will retry after 1.686461169s: waiting for machine to come up
	I1212 23:19:58.242864  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:19:58.243180  156765 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:19:58.243210  156765 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:19:58.243131  156788 retry.go:31] will retry after 2.051875097s: waiting for machine to come up
	I1212 23:20:00.296257  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:00.296727  156765 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:20:00.296758  156765 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:20:00.296662  156788 retry.go:31] will retry after 2.822897332s: waiting for machine to come up
	I1212 23:20:03.120714  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:03.121124  156765 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:20:03.121143  156765 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:20:03.121082  156788 retry.go:31] will retry after 3.075865629s: waiting for machine to come up
	I1212 23:20:06.198889  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:06.199322  156765 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:20:06.199351  156765 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:20:06.199251  156788 retry.go:31] will retry after 3.895141295s: waiting for machine to come up
	I1212 23:20:10.096264  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:10.096799  156765 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:20:10.096822  156765 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:20:10.096744  156788 retry.go:31] will retry after 4.95214538s: waiting for machine to come up
	I1212 23:20:15.050308  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:15.050762  156765 main.go:141] libmachine: (multinode-510563) Found IP for machine: 192.168.39.38
	I1212 23:20:15.050792  156765 main.go:141] libmachine: (multinode-510563) Reserving static IP address...
	I1212 23:20:15.050807  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has current primary IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:15.051149  156765 main.go:141] libmachine: (multinode-510563) DBG | unable to find host DHCP lease matching {name: "multinode-510563", mac: "52:54:00:2d:9f:26", ip: "192.168.39.38"} in network mk-multinode-510563
	I1212 23:20:15.123505  156765 main.go:141] libmachine: (multinode-510563) DBG | Getting to WaitForSSH function...
	I1212 23:20:15.123566  156765 main.go:141] libmachine: (multinode-510563) Reserved static IP address: 192.168.39.38
	I1212 23:20:15.123586  156765 main.go:141] libmachine: (multinode-510563) Waiting for SSH to be available...
	I1212 23:20:15.125869  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:15.126211  156765 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2d:9f:26}
	I1212 23:20:15.126246  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:15.126343  156765 main.go:141] libmachine: (multinode-510563) DBG | Using SSH client type: external
	I1212 23:20:15.126382  156765 main.go:141] libmachine: (multinode-510563) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa (-rw-------)
	I1212 23:20:15.126434  156765 main.go:141] libmachine: (multinode-510563) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:20:15.126561  156765 main.go:141] libmachine: (multinode-510563) DBG | About to run SSH command:
	I1212 23:20:15.126586  156765 main.go:141] libmachine: (multinode-510563) DBG | exit 0
	I1212 23:20:15.216154  156765 main.go:141] libmachine: (multinode-510563) DBG | SSH cmd err, output: <nil>: 
	I1212 23:20:15.216474  156765 main.go:141] libmachine: (multinode-510563) KVM machine creation complete!
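The repeated "retry.go:31] will retry after ..." lines above show the driver polling for the VM's DHCP lease with a growing delay until an IP address appears. A stand-alone sketch of that pattern, with a stubbed lease lookup and made-up delays, might be:

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupLeaseIP stands in for querying libvirt's DHCP leases; it is only a stub here.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("machine has no lease yet")
}

// waitForIP polls until the MAC address shows up in a lease or the timeout expires,
// growing the delay between attempts much like the log above.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay += delay / 2 // stretch the wait between polls
		}
	}
	return "", fmt.Errorf("timed out waiting for a DHCP lease for %s", mac)
}

func main() {
	ip, err := waitForIP("52:54:00:2d:9f:26", 10*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("found IP:", ip)
}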
	I1212 23:20:15.216827  156765 main.go:141] libmachine: (multinode-510563) Calling .GetConfigRaw
	I1212 23:20:15.217305  156765 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:20:15.217503  156765 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:20:15.217658  156765 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 23:20:15.217674  156765 main.go:141] libmachine: (multinode-510563) Calling .GetState
	I1212 23:20:15.218883  156765 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 23:20:15.218898  156765 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 23:20:15.218905  156765 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 23:20:15.218911  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:20:15.221080  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:15.221488  156765 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:20:15.221509  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:15.221647  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:20:15.221823  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:20:15.221947  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:20:15.222056  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:20:15.222175  156765 main.go:141] libmachine: Using SSH client type: native
	I1212 23:20:15.222489  156765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1212 23:20:15.222500  156765 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 23:20:15.339815  156765 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:20:15.339834  156765 main.go:141] libmachine: Detecting the provisioner...
	I1212 23:20:15.339842  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:20:15.342657  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:15.343016  156765 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:20:15.343048  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:15.343204  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:20:15.343421  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:20:15.343571  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:20:15.343699  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:20:15.343833  156765 main.go:141] libmachine: Using SSH client type: native
	I1212 23:20:15.344161  156765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1212 23:20:15.344175  156765 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 23:20:15.461772  156765 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0ec83c8-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 23:20:15.461846  156765 main.go:141] libmachine: found compatible host: buildroot
	I1212 23:20:15.461861  156765 main.go:141] libmachine: Provisioning with buildroot...
	I1212 23:20:15.461877  156765 main.go:141] libmachine: (multinode-510563) Calling .GetMachineName
	I1212 23:20:15.462122  156765 buildroot.go:166] provisioning hostname "multinode-510563"
	I1212 23:20:15.462149  156765 main.go:141] libmachine: (multinode-510563) Calling .GetMachineName
	I1212 23:20:15.462321  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:20:15.464916  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:15.465326  156765 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:20:15.465354  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:15.465521  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:20:15.465698  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:20:15.465934  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:20:15.466114  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:20:15.466341  156765 main.go:141] libmachine: Using SSH client type: native
	I1212 23:20:15.466709  156765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1212 23:20:15.466725  156765 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-510563 && echo "multinode-510563" | sudo tee /etc/hostname
	I1212 23:20:15.596787  156765 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-510563
	
	I1212 23:20:15.596813  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:20:15.599271  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:15.599666  156765 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:20:15.599699  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:15.599892  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:20:15.600102  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:20:15.600291  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:20:15.600484  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:20:15.600676  156765 main.go:141] libmachine: Using SSH client type: native
	I1212 23:20:15.600993  156765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1212 23:20:15.601011  156765 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-510563' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-510563/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-510563' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:20:15.724733  156765 main.go:141] libmachine: SSH cmd err, output: <nil>: 
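The hostname command and the /etc/hosts edit above are executed over SSH with the key generated earlier. Below is a self-contained sketch of that step using golang.org/x/crypto/ssh; the key path, user and IP come from the log, while the rest is a simplified stand-in for minikube's own ssh_runner:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.39.38:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Same command the provisioner runs in the log above.
	out, err := session.CombinedOutput(`sudo hostname multinode-510563 && echo "multinode-510563" | sudo tee /etc/hostname`)
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}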
	I1212 23:20:15.724765  156765 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1212 23:20:15.724803  156765 buildroot.go:174] setting up certificates
	I1212 23:20:15.724824  156765 provision.go:83] configureAuth start
	I1212 23:20:15.724847  156765 main.go:141] libmachine: (multinode-510563) Calling .GetMachineName
	I1212 23:20:15.725201  156765 main.go:141] libmachine: (multinode-510563) Calling .GetIP
	I1212 23:20:15.728188  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:15.728567  156765 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:20:15.728596  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:15.728747  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:20:15.730958  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:15.731351  156765 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:20:15.731385  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:15.731571  156765 provision.go:138] copyHostCerts
	I1212 23:20:15.731608  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1212 23:20:15.731661  156765 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1212 23:20:15.731684  156765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1212 23:20:15.731748  156765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1212 23:20:15.731855  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1212 23:20:15.731886  156765 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1212 23:20:15.731894  156765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1212 23:20:15.731934  156765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1212 23:20:15.732000  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1212 23:20:15.732023  156765 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1212 23:20:15.732038  156765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1212 23:20:15.732066  156765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1212 23:20:15.732123  156765 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.multinode-510563 san=[192.168.39.38 192.168.39.38 localhost 127.0.0.1 minikube multinode-510563]
	I1212 23:20:15.901079  156765 provision.go:172] copyRemoteCerts
	I1212 23:20:15.901164  156765 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:20:15.901196  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:20:15.903953  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:15.904318  156765 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:20:15.904349  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:15.904599  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:20:15.904809  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:20:15.905000  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:20:15.905149  156765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa Username:docker}
	I1212 23:20:15.996109  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 23:20:15.996189  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:20:16.021082  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 23:20:16.021166  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 23:20:16.044773  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 23:20:16.044842  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:20:16.068144  156765 provision.go:86] duration metric: configureAuth took 343.302971ms
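configureAuth above copies the host certificates and then mints a server certificate whose SANs cover the VM IP, localhost and the machine name. The sketch below shows where those SANs land in a certificate template; it self-signs for brevity (minikube signs with the CA referenced in the log) and reuses the CertExpiration value from the machine config for the validity period:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-510563"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the machine config
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the "san=[...]" list in the provision.go line above.
		DNSNames:    []string{"localhost", "minikube", "multinode-510563"},
		IPAddresses: []net.IP{net.ParseIP("192.168.39.38"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}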
	I1212 23:20:16.068175  156765 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:20:16.068394  156765 config.go:182] Loaded profile config "multinode-510563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:20:16.068496  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:20:16.071089  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:16.071369  156765 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:20:16.071396  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:16.071622  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:20:16.071816  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:20:16.072017  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:20:16.072171  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:20:16.072306  156765 main.go:141] libmachine: Using SSH client type: native
	I1212 23:20:16.072670  156765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1212 23:20:16.072688  156765 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:20:16.413915  156765 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:20:16.413983  156765 main.go:141] libmachine: Checking connection to Docker...
	I1212 23:20:16.413997  156765 main.go:141] libmachine: (multinode-510563) Calling .GetURL
	I1212 23:20:16.415339  156765 main.go:141] libmachine: (multinode-510563) DBG | Using libvirt version 6000000
	I1212 23:20:16.417676  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:16.418002  156765 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:20:16.418031  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:16.418171  156765 main.go:141] libmachine: Docker is up and running!
	I1212 23:20:16.418182  156765 main.go:141] libmachine: Reticulating splines...
	I1212 23:20:16.418196  156765 client.go:171] LocalClient.Create took 26.71683987s
	I1212 23:20:16.418237  156765 start.go:167] duration metric: libmachine.API.Create for "multinode-510563" took 26.716927919s
	I1212 23:20:16.418249  156765 start.go:300] post-start starting for "multinode-510563" (driver="kvm2")
	I1212 23:20:16.418261  156765 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:20:16.418281  156765 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:20:16.418573  156765 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:20:16.418605  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:20:16.420661  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:16.420998  156765 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:20:16.421019  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:16.421095  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:20:16.421283  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:20:16.421427  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:20:16.421557  156765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa Username:docker}
	I1212 23:20:16.509977  156765 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:20:16.514208  156765 command_runner.go:130] > NAME=Buildroot
	I1212 23:20:16.514232  156765 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
	I1212 23:20:16.514236  156765 command_runner.go:130] > ID=buildroot
	I1212 23:20:16.514240  156765 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 23:20:16.514245  156765 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 23:20:16.514427  156765 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:20:16.514452  156765 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1212 23:20:16.514538  156765 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1212 23:20:16.514647  156765 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1212 23:20:16.514664  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> /etc/ssl/certs/1435412.pem
	I1212 23:20:16.514782  156765 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:20:16.522938  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1212 23:20:16.545447  156765 start.go:303] post-start completed in 127.179535ms
	I1212 23:20:16.545507  156765 main.go:141] libmachine: (multinode-510563) Calling .GetConfigRaw
	I1212 23:20:16.546168  156765 main.go:141] libmachine: (multinode-510563) Calling .GetIP
	I1212 23:20:16.548771  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:16.549112  156765 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:20:16.549135  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:16.549501  156765 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/config.json ...
	I1212 23:20:16.549707  156765 start.go:128] duration metric: createHost completed in 26.866699223s
	I1212 23:20:16.549745  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:20:16.552051  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:16.552370  156765 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:20:16.552414  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:16.552516  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:20:16.552782  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:20:16.552944  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:20:16.553069  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:20:16.553259  156765 main.go:141] libmachine: Using SSH client type: native
	I1212 23:20:16.553570  156765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1212 23:20:16.553581  156765 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:20:16.668959  156765 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423216.655274809
	
	I1212 23:20:16.668987  156765 fix.go:206] guest clock: 1702423216.655274809
	I1212 23:20:16.668997  156765 fix.go:219] Guest: 2023-12-12 23:20:16.655274809 +0000 UTC Remote: 2023-12-12 23:20:16.549721306 +0000 UTC m=+26.987090109 (delta=105.553503ms)
	I1212 23:20:16.669039  156765 fix.go:190] guest clock delta is within tolerance: 105.553503ms
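The fix.go lines above read the guest clock with "date +%s.%N" over SSH and accept the machine when the delta against the host clock is small. A minimal sketch of that comparison, with an assumed tolerance rather than minikube's actual threshold:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	guestOut := "1702423216.655274809" // what the guest printed over SSH
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	// float64 loses sub-microsecond precision at this magnitude; fine for a sketch.
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)

	const tolerance = 2 * time.Second // assumed threshold, not minikube's exact value
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}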
	I1212 23:20:16.669051  156765 start.go:83] releasing machines lock for "multinode-510563", held for 26.986131184s
	I1212 23:20:16.669084  156765 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:20:16.669336  156765 main.go:141] libmachine: (multinode-510563) Calling .GetIP
	I1212 23:20:16.671548  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:16.671892  156765 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:20:16.671930  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:16.672037  156765 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:20:16.672470  156765 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:20:16.672650  156765 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:20:16.672731  156765 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:20:16.672770  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:20:16.672823  156765 ssh_runner.go:195] Run: cat /version.json
	I1212 23:20:16.672842  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:20:16.675592  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:16.675621  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:16.675954  156765 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:20:16.675981  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:16.676009  156765 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:20:16.676026  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:16.676114  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:20:16.676258  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:20:16.676279  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:20:16.676413  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:20:16.676448  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:20:16.676578  156765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa Username:docker}
	I1212 23:20:16.676592  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:20:16.676710  156765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa Username:docker}
	I1212 23:20:16.756847  156765 command_runner.go:130] > {"iso_version": "v1.32.1-1701996673-17738", "kicbase_version": "v0.0.42-1701974066-17719", "minikube_version": "v1.32.0", "commit": "2518fadffa02a308edcd7fa670f350a21819c5e4"}
	I1212 23:20:16.757367  156765 ssh_runner.go:195] Run: systemctl --version
	I1212 23:20:16.781661  156765 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 23:20:16.782580  156765 command_runner.go:130] > systemd 247 (247)
	I1212 23:20:16.782618  156765 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 23:20:16.782676  156765 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:20:16.938752  156765 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 23:20:16.945141  156765 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 23:20:16.945423  156765 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:20:16.945506  156765 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:20:16.960560  156765 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 23:20:16.960793  156765 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:20:16.960813  156765 start.go:475] detecting cgroup driver to use...
	I1212 23:20:16.960889  156765 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:20:16.976053  156765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:20:16.987712  156765 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:20:16.987762  156765 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:20:16.999522  156765 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:20:17.011713  156765 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:20:17.116193  156765 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1212 23:20:17.116290  156765 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:20:17.229318  156765 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1212 23:20:17.229408  156765 docker.go:219] disabling docker service ...
	I1212 23:20:17.229483  156765 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:20:17.242476  156765 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:20:17.253759  156765 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1212 23:20:17.253932  156765 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:20:17.266738  156765 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1212 23:20:17.364340  156765 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:20:17.376954  156765 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1212 23:20:17.377420  156765 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1212 23:20:17.470533  156765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
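The block above takes Docker and cri-dockerd out of the picture so the kubelet can only reach CRI-O. Roughly the same systemctl sequence, driven from Go; failures for units that are not loaded are expected, as the log itself shows:

package main

import (
	"fmt"
	"os/exec"
)

// Stop, disable and mask the Docker and cri-docker units, mirroring the
// sequence logged above. Errors for missing units are only reported.
func main() {
	cmds := [][]string{
		{"systemctl", "stop", "-f", "cri-docker.socket"},
		{"systemctl", "stop", "-f", "cri-docker.service"},
		{"systemctl", "disable", "cri-docker.socket"},
		{"systemctl", "mask", "cri-docker.service"},
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, c := range cmds {
		if out, err := exec.Command("sudo", c...).CombinedOutput(); err != nil {
			fmt.Printf("%v: %v (%s)\n", c, err, out)
		}
	}
}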
	I1212 23:20:17.483412  156765 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:20:17.500041  156765 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 23:20:17.500423  156765 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:20:17.500507  156765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:20:17.510236  156765 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:20:17.510290  156765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:20:17.520019  156765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:20:17.529608  156765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:20:17.538999  156765 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
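The sed invocations above boil down to rewriting two keys in the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf: the pause image and the cgroup manager (plus conmon_cgroup). A simplified sketch of that rewrite, assuming root on the node and keeping no backup:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// Rewrite pause_image and cgroup_manager in CRI-O's drop-in config.
// The log does this with sed: it also deletes any existing conmon_cgroup
// line before re-adding it; here both keys are emitted together.
func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n`).ReplaceAll(out, nil)
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}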
	I1212 23:20:17.548994  156765 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:20:17.557617  156765 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:20:17.557659  156765 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:20:17.557695  156765 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:20:17.570761  156765 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:20:17.579746  156765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:20:17.688693  156765 ssh_runner.go:195] Run: sudo systemctl restart crio
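Before restarting CRI-O, the run loads br_netfilter (so /proc/sys/net/bridge/bridge-nf-call-iptables exists) and enables IPv4 forwarding. The same prerequisites as a standalone sketch, assuming root on the node:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Kernel prerequisites applied above before restarting CRI-O:
// load br_netfilter and turn on IPv4 forwarding, then restart the runtime.
func main() {
	if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
		fmt.Fprintln(os.Stderr, "modprobe br_netfilter:", err)
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
	}
	if err := exec.Command("systemctl", "restart", "crio").Run(); err != nil {
		fmt.Fprintln(os.Stderr, "restart crio:", err)
	}
}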
	I1212 23:20:17.850428  156765 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:20:17.850500  156765 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:20:17.855699  156765 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 23:20:17.855723  156765 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 23:20:17.855730  156765 command_runner.go:130] > Device: 16h/22d	Inode: 794         Links: 1
	I1212 23:20:17.855737  156765 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 23:20:17.855746  156765 command_runner.go:130] > Access: 2023-12-12 23:20:17.822826132 +0000
	I1212 23:20:17.855758  156765 command_runner.go:130] > Modify: 2023-12-12 23:20:17.822826132 +0000
	I1212 23:20:17.855770  156765 command_runner.go:130] > Change: 2023-12-12 23:20:17.822826132 +0000
	I1212 23:20:17.855776  156765 command_runner.go:130] >  Birth: -
	I1212 23:20:17.855824  156765 start.go:543] Will wait 60s for crictl version
	I1212 23:20:17.855875  156765 ssh_runner.go:195] Run: which crictl
	I1212 23:20:17.859520  156765 command_runner.go:130] > /usr/bin/crictl
	I1212 23:20:17.859808  156765 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:20:17.901551  156765 command_runner.go:130] > Version:  0.1.0
	I1212 23:20:17.901576  156765 command_runner.go:130] > RuntimeName:  cri-o
	I1212 23:20:17.901583  156765 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1212 23:20:17.901588  156765 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 23:20:17.901607  156765 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
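The "Will wait 60s for crictl version" step above amounts to polling the freshly restarted runtime until its socket answers. A hypothetical retry loop that does the same:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Poll `sudo crictl version` until the CRI-O socket responds or a
// 60-second deadline (as in the log) expires. Sketch only.
func main() {
	deadline := time.Now().Add(60 * time.Second)
	for {
		out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("runtime did not become ready:", err)
			return
		}
		time.Sleep(2 * time.Second)
	}
}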
	I1212 23:20:17.901673  156765 ssh_runner.go:195] Run: crio --version
	I1212 23:20:17.947629  156765 command_runner.go:130] > crio version 1.24.1
	I1212 23:20:17.947654  156765 command_runner.go:130] > Version:          1.24.1
	I1212 23:20:17.947663  156765 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 23:20:17.947682  156765 command_runner.go:130] > GitTreeState:     dirty
	I1212 23:20:17.947692  156765 command_runner.go:130] > BuildDate:        2023-12-08T06:18:18Z
	I1212 23:20:17.947700  156765 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 23:20:17.947707  156765 command_runner.go:130] > Compiler:         gc
	I1212 23:20:17.947714  156765 command_runner.go:130] > Platform:         linux/amd64
	I1212 23:20:17.947723  156765 command_runner.go:130] > Linkmode:         dynamic
	I1212 23:20:17.947730  156765 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 23:20:17.947738  156765 command_runner.go:130] > SeccompEnabled:   true
	I1212 23:20:17.947742  156765 command_runner.go:130] > AppArmorEnabled:  false
	I1212 23:20:17.948678  156765 ssh_runner.go:195] Run: crio --version
	I1212 23:20:17.999628  156765 command_runner.go:130] > crio version 1.24.1
	I1212 23:20:17.999648  156765 command_runner.go:130] > Version:          1.24.1
	I1212 23:20:17.999655  156765 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 23:20:17.999659  156765 command_runner.go:130] > GitTreeState:     dirty
	I1212 23:20:17.999671  156765 command_runner.go:130] > BuildDate:        2023-12-08T06:18:18Z
	I1212 23:20:17.999678  156765 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 23:20:17.999686  156765 command_runner.go:130] > Compiler:         gc
	I1212 23:20:17.999694  156765 command_runner.go:130] > Platform:         linux/amd64
	I1212 23:20:17.999703  156765 command_runner.go:130] > Linkmode:         dynamic
	I1212 23:20:17.999728  156765 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 23:20:17.999736  156765 command_runner.go:130] > SeccompEnabled:   true
	I1212 23:20:17.999740  156765 command_runner.go:130] > AppArmorEnabled:  false
	I1212 23:20:18.002321  156765 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 23:20:18.003525  156765 main.go:141] libmachine: (multinode-510563) Calling .GetIP
	I1212 23:20:18.006379  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:18.006745  156765 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:20:18.006771  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:18.007058  156765 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 23:20:18.011283  156765 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
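The grep -v / echo / cp pipeline above is an idempotent way to pin host.minikube.internal in /etc/hosts: strip any stale entry, then append the current one. A sketch of the same update (gateway IP taken from the log; run as root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// Remove any existing host.minikube.internal line from /etc/hosts and
// append the current mapping, mirroring the shell pipeline in the log.
func main() {
	const entry = "192.168.39.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}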
	I1212 23:20:18.025780  156765 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:20:18.025841  156765 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:20:18.068182  156765 command_runner.go:130] > {
	I1212 23:20:18.068208  156765 command_runner.go:130] >   "images": [
	I1212 23:20:18.068212  156765 command_runner.go:130] >   ]
	I1212 23:20:18.068216  156765 command_runner.go:130] > }
	I1212 23:20:18.068370  156765 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 23:20:18.068459  156765 ssh_runner.go:195] Run: which lz4
	I1212 23:20:18.072496  156765 command_runner.go:130] > /usr/bin/lz4
	I1212 23:20:18.072526  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1212 23:20:18.072615  156765 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 23:20:18.076541  156765 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:20:18.076586  156765 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:20:18.076603  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 23:20:19.906409  156765 crio.go:444] Took 1.833829 seconds to copy over tarball
	I1212 23:20:19.906490  156765 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:20:22.930915  156765 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.024400655s)
	I1212 23:20:22.930941  156765 crio.go:451] Took 3.024508 seconds to extract the tarball
	I1212 23:20:22.930950  156765 ssh_runner.go:146] rm: /preloaded.tar.lz4
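The preload step copies the image tarball to the node and unpacks it into /var so CRI-O's storage is populated before kubeadm runs. Roughly the equivalent manual commands, using the SSH key, host and tarball path from the log; note that minikube does this over its own SSH runner rather than scp/ssh, and the tarball is staged under /tmp here because the copy runs as the unprivileged docker user:

package main

import (
	"fmt"
	"os/exec"
)

// Copy the preload tarball to the node and extract it into /var so the
// image store (/var/lib/containers/storage) is pre-populated.
func main() {
	key := "/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa"
	tarball := "/home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4"
	steps := [][]string{
		{"scp", "-i", key, tarball, "docker@192.168.39.38:/tmp/preloaded.tar.lz4"},
		{"ssh", "-i", key, "docker@192.168.39.38",
			"sudo tar -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm /tmp/preloaded.tar.lz4"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v\n%s\n", s, err, out)
			return
		}
	}
	fmt.Println("preload unpacked into /var/lib/containers/storage")
}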
	I1212 23:20:22.971781  156765 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:20:23.043235  156765 command_runner.go:130] > {
	I1212 23:20:23.043255  156765 command_runner.go:130] >   "images": [
	I1212 23:20:23.043259  156765 command_runner.go:130] >     {
	I1212 23:20:23.043266  156765 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1212 23:20:23.043271  156765 command_runner.go:130] >       "repoTags": [
	I1212 23:20:23.043287  156765 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1212 23:20:23.043294  156765 command_runner.go:130] >       ],
	I1212 23:20:23.043299  156765 command_runner.go:130] >       "repoDigests": [
	I1212 23:20:23.043313  156765 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1212 23:20:23.043326  156765 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1212 23:20:23.043333  156765 command_runner.go:130] >       ],
	I1212 23:20:23.043339  156765 command_runner.go:130] >       "size": "65258016",
	I1212 23:20:23.043344  156765 command_runner.go:130] >       "uid": null,
	I1212 23:20:23.043348  156765 command_runner.go:130] >       "username": "",
	I1212 23:20:23.043356  156765 command_runner.go:130] >       "spec": null,
	I1212 23:20:23.043364  156765 command_runner.go:130] >       "pinned": false
	I1212 23:20:23.043367  156765 command_runner.go:130] >     },
	I1212 23:20:23.043371  156765 command_runner.go:130] >     {
	I1212 23:20:23.043383  156765 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1212 23:20:23.043391  156765 command_runner.go:130] >       "repoTags": [
	I1212 23:20:23.043401  156765 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 23:20:23.043418  156765 command_runner.go:130] >       ],
	I1212 23:20:23.043429  156765 command_runner.go:130] >       "repoDigests": [
	I1212 23:20:23.043444  156765 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1212 23:20:23.043491  156765 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1212 23:20:23.043533  156765 command_runner.go:130] >       ],
	I1212 23:20:23.043557  156765 command_runner.go:130] >       "size": "31470524",
	I1212 23:20:23.043568  156765 command_runner.go:130] >       "uid": null,
	I1212 23:20:23.043576  156765 command_runner.go:130] >       "username": "",
	I1212 23:20:23.043585  156765 command_runner.go:130] >       "spec": null,
	I1212 23:20:23.043593  156765 command_runner.go:130] >       "pinned": false
	I1212 23:20:23.043602  156765 command_runner.go:130] >     },
	I1212 23:20:23.043608  156765 command_runner.go:130] >     {
	I1212 23:20:23.043620  156765 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1212 23:20:23.043630  156765 command_runner.go:130] >       "repoTags": [
	I1212 23:20:23.043645  156765 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1212 23:20:23.043651  156765 command_runner.go:130] >       ],
	I1212 23:20:23.043662  156765 command_runner.go:130] >       "repoDigests": [
	I1212 23:20:23.043674  156765 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1212 23:20:23.043690  156765 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1212 23:20:23.043699  156765 command_runner.go:130] >       ],
	I1212 23:20:23.043711  156765 command_runner.go:130] >       "size": "53621675",
	I1212 23:20:23.043719  156765 command_runner.go:130] >       "uid": null,
	I1212 23:20:23.043725  156765 command_runner.go:130] >       "username": "",
	I1212 23:20:23.043734  156765 command_runner.go:130] >       "spec": null,
	I1212 23:20:23.043741  156765 command_runner.go:130] >       "pinned": false
	I1212 23:20:23.043751  156765 command_runner.go:130] >     },
	I1212 23:20:23.043758  156765 command_runner.go:130] >     {
	I1212 23:20:23.043774  156765 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1212 23:20:23.043785  156765 command_runner.go:130] >       "repoTags": [
	I1212 23:20:23.043794  156765 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1212 23:20:23.043801  156765 command_runner.go:130] >       ],
	I1212 23:20:23.043806  156765 command_runner.go:130] >       "repoDigests": [
	I1212 23:20:23.043821  156765 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1212 23:20:23.043836  156765 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1212 23:20:23.043854  156765 command_runner.go:130] >       ],
	I1212 23:20:23.043864  156765 command_runner.go:130] >       "size": "295456551",
	I1212 23:20:23.043871  156765 command_runner.go:130] >       "uid": {
	I1212 23:20:23.043881  156765 command_runner.go:130] >         "value": "0"
	I1212 23:20:23.043891  156765 command_runner.go:130] >       },
	I1212 23:20:23.043900  156765 command_runner.go:130] >       "username": "",
	I1212 23:20:23.043905  156765 command_runner.go:130] >       "spec": null,
	I1212 23:20:23.043913  156765 command_runner.go:130] >       "pinned": false
	I1212 23:20:23.043919  156765 command_runner.go:130] >     },
	I1212 23:20:23.043928  156765 command_runner.go:130] >     {
	I1212 23:20:23.043938  156765 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1212 23:20:23.043957  156765 command_runner.go:130] >       "repoTags": [
	I1212 23:20:23.043969  156765 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1212 23:20:23.043978  156765 command_runner.go:130] >       ],
	I1212 23:20:23.043985  156765 command_runner.go:130] >       "repoDigests": [
	I1212 23:20:23.044000  156765 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1212 23:20:23.044014  156765 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1212 23:20:23.044024  156765 command_runner.go:130] >       ],
	I1212 23:20:23.044032  156765 command_runner.go:130] >       "size": "127226832",
	I1212 23:20:23.044042  156765 command_runner.go:130] >       "uid": {
	I1212 23:20:23.044051  156765 command_runner.go:130] >         "value": "0"
	I1212 23:20:23.044058  156765 command_runner.go:130] >       },
	I1212 23:20:23.044069  156765 command_runner.go:130] >       "username": "",
	I1212 23:20:23.044079  156765 command_runner.go:130] >       "spec": null,
	I1212 23:20:23.044086  156765 command_runner.go:130] >       "pinned": false
	I1212 23:20:23.044092  156765 command_runner.go:130] >     },
	I1212 23:20:23.044097  156765 command_runner.go:130] >     {
	I1212 23:20:23.044111  156765 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1212 23:20:23.044126  156765 command_runner.go:130] >       "repoTags": [
	I1212 23:20:23.044139  156765 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1212 23:20:23.044146  156765 command_runner.go:130] >       ],
	I1212 23:20:23.044153  156765 command_runner.go:130] >       "repoDigests": [
	I1212 23:20:23.044166  156765 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1212 23:20:23.044181  156765 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1212 23:20:23.044190  156765 command_runner.go:130] >       ],
	I1212 23:20:23.044198  156765 command_runner.go:130] >       "size": "123261750",
	I1212 23:20:23.044208  156765 command_runner.go:130] >       "uid": {
	I1212 23:20:23.044215  156765 command_runner.go:130] >         "value": "0"
	I1212 23:20:23.044224  156765 command_runner.go:130] >       },
	I1212 23:20:23.044231  156765 command_runner.go:130] >       "username": "",
	I1212 23:20:23.044243  156765 command_runner.go:130] >       "spec": null,
	I1212 23:20:23.044251  156765 command_runner.go:130] >       "pinned": false
	I1212 23:20:23.044260  156765 command_runner.go:130] >     },
	I1212 23:20:23.044266  156765 command_runner.go:130] >     {
	I1212 23:20:23.044281  156765 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1212 23:20:23.044291  156765 command_runner.go:130] >       "repoTags": [
	I1212 23:20:23.044302  156765 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1212 23:20:23.044311  156765 command_runner.go:130] >       ],
	I1212 23:20:23.044322  156765 command_runner.go:130] >       "repoDigests": [
	I1212 23:20:23.044336  156765 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1212 23:20:23.044347  156765 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1212 23:20:23.044356  156765 command_runner.go:130] >       ],
	I1212 23:20:23.044364  156765 command_runner.go:130] >       "size": "74749335",
	I1212 23:20:23.044373  156765 command_runner.go:130] >       "uid": null,
	I1212 23:20:23.044381  156765 command_runner.go:130] >       "username": "",
	I1212 23:20:23.044391  156765 command_runner.go:130] >       "spec": null,
	I1212 23:20:23.044398  156765 command_runner.go:130] >       "pinned": false
	I1212 23:20:23.044407  156765 command_runner.go:130] >     },
	I1212 23:20:23.044419  156765 command_runner.go:130] >     {
	I1212 23:20:23.044445  156765 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1212 23:20:23.044457  156765 command_runner.go:130] >       "repoTags": [
	I1212 23:20:23.044468  156765 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1212 23:20:23.044477  156765 command_runner.go:130] >       ],
	I1212 23:20:23.044484  156765 command_runner.go:130] >       "repoDigests": [
	I1212 23:20:23.044520  156765 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1212 23:20:23.044535  156765 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1212 23:20:23.044545  156765 command_runner.go:130] >       ],
	I1212 23:20:23.044553  156765 command_runner.go:130] >       "size": "61551410",
	I1212 23:20:23.044563  156765 command_runner.go:130] >       "uid": {
	I1212 23:20:23.044570  156765 command_runner.go:130] >         "value": "0"
	I1212 23:20:23.044579  156765 command_runner.go:130] >       },
	I1212 23:20:23.044586  156765 command_runner.go:130] >       "username": "",
	I1212 23:20:23.044595  156765 command_runner.go:130] >       "spec": null,
	I1212 23:20:23.044603  156765 command_runner.go:130] >       "pinned": false
	I1212 23:20:23.044607  156765 command_runner.go:130] >     },
	I1212 23:20:23.044610  156765 command_runner.go:130] >     {
	I1212 23:20:23.044622  156765 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1212 23:20:23.044633  156765 command_runner.go:130] >       "repoTags": [
	I1212 23:20:23.044641  156765 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1212 23:20:23.044650  156765 command_runner.go:130] >       ],
	I1212 23:20:23.044658  156765 command_runner.go:130] >       "repoDigests": [
	I1212 23:20:23.044672  156765 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1212 23:20:23.044687  156765 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1212 23:20:23.044694  156765 command_runner.go:130] >       ],
	I1212 23:20:23.044699  156765 command_runner.go:130] >       "size": "750414",
	I1212 23:20:23.044708  156765 command_runner.go:130] >       "uid": {
	I1212 23:20:23.044717  156765 command_runner.go:130] >         "value": "65535"
	I1212 23:20:23.044727  156765 command_runner.go:130] >       },
	I1212 23:20:23.044734  156765 command_runner.go:130] >       "username": "",
	I1212 23:20:23.044744  156765 command_runner.go:130] >       "spec": null,
	I1212 23:20:23.044751  156765 command_runner.go:130] >       "pinned": false
	I1212 23:20:23.044760  156765 command_runner.go:130] >     }
	I1212 23:20:23.044766  156765 command_runner.go:130] >   ]
	I1212 23:20:23.044772  156765 command_runner.go:130] > }
	I1212 23:20:23.044927  156765 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 23:20:23.044939  156765 cache_images.go:84] Images are preloaded, skipping loading
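The preload decision hinges on comparing crictl images --output json against a required tag: the empty image list at 23:20:18 triggers the tarball copy, and the populated list above passes. A sketch of that check for registry.k8s.io/kube-apiserver:v1.28.4:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Decide whether a preload is needed by looking for a required tag in the
// JSON image list, as the log does for kube-apiserver:v1.28.4.
func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	const want = "registry.k8s.io/kube-apiserver:v1.28.4"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("images are preloaded")
				return
			}
		}
	}
	fmt.Println("preload required:", want, "not found")
}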
	I1212 23:20:23.044999  156765 ssh_runner.go:195] Run: crio config
	I1212 23:20:23.095824  156765 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 23:20:23.095854  156765 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 23:20:23.095864  156765 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 23:20:23.095869  156765 command_runner.go:130] > #
	I1212 23:20:23.095879  156765 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 23:20:23.095888  156765 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 23:20:23.095898  156765 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 23:20:23.095933  156765 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 23:20:23.095944  156765 command_runner.go:130] > # reload'.
	I1212 23:20:23.095950  156765 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 23:20:23.095956  156765 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 23:20:23.095963  156765 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 23:20:23.095969  156765 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 23:20:23.095975  156765 command_runner.go:130] > [crio]
	I1212 23:20:23.095982  156765 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 23:20:23.095989  156765 command_runner.go:130] > # containers images, in this directory.
	I1212 23:20:23.095993  156765 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1212 23:20:23.096005  156765 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 23:20:23.096013  156765 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1212 23:20:23.096019  156765 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 23:20:23.096039  156765 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 23:20:23.096046  156765 command_runner.go:130] > storage_driver = "overlay"
	I1212 23:20:23.096054  156765 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 23:20:23.096061  156765 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 23:20:23.096068  156765 command_runner.go:130] > storage_option = [
	I1212 23:20:23.096106  156765 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1212 23:20:23.096115  156765 command_runner.go:130] > ]
	I1212 23:20:23.096125  156765 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 23:20:23.096142  156765 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 23:20:23.096153  156765 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 23:20:23.096160  156765 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 23:20:23.096167  156765 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 23:20:23.096175  156765 command_runner.go:130] > # always happen on a node reboot
	I1212 23:20:23.096182  156765 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 23:20:23.096188  156765 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 23:20:23.096194  156765 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 23:20:23.096209  156765 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 23:20:23.096222  156765 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1212 23:20:23.096238  156765 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 23:20:23.096254  156765 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 23:20:23.096264  156765 command_runner.go:130] > # internal_wipe = true
	I1212 23:20:23.096271  156765 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 23:20:23.096282  156765 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 23:20:23.096293  156765 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 23:20:23.096306  156765 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 23:20:23.096317  156765 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 23:20:23.096327  156765 command_runner.go:130] > [crio.api]
	I1212 23:20:23.096336  156765 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 23:20:23.096364  156765 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 23:20:23.096372  156765 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 23:20:23.096383  156765 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 23:20:23.096398  156765 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 23:20:23.096409  156765 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 23:20:23.096420  156765 command_runner.go:130] > # stream_port = "0"
	I1212 23:20:23.096437  156765 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 23:20:23.096447  156765 command_runner.go:130] > # stream_enable_tls = false
	I1212 23:20:23.096455  156765 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 23:20:23.096464  156765 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 23:20:23.096475  156765 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 23:20:23.096489  156765 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1212 23:20:23.096498  156765 command_runner.go:130] > # minutes.
	I1212 23:20:23.096511  156765 command_runner.go:130] > # stream_tls_cert = ""
	I1212 23:20:23.096524  156765 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 23:20:23.096535  156765 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1212 23:20:23.096545  156765 command_runner.go:130] > # stream_tls_key = ""
	I1212 23:20:23.096555  156765 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 23:20:23.096568  156765 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 23:20:23.096580  156765 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1212 23:20:23.096594  156765 command_runner.go:130] > # stream_tls_ca = ""
	I1212 23:20:23.096608  156765 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 23:20:23.096620  156765 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1212 23:20:23.096635  156765 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 23:20:23.096646  156765 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1212 23:20:23.096663  156765 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 23:20:23.096675  156765 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 23:20:23.096686  156765 command_runner.go:130] > [crio.runtime]
	I1212 23:20:23.096696  156765 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 23:20:23.096708  156765 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 23:20:23.096718  156765 command_runner.go:130] > # "nofile=1024:2048"
	I1212 23:20:23.096728  156765 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 23:20:23.096739  156765 command_runner.go:130] > # default_ulimits = [
	I1212 23:20:23.096792  156765 command_runner.go:130] > # ]
	I1212 23:20:23.096807  156765 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 23:20:23.096813  156765 command_runner.go:130] > # no_pivot = false
	I1212 23:20:23.096823  156765 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 23:20:23.096837  156765 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 23:20:23.096846  156765 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 23:20:23.096860  156765 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 23:20:23.096872  156765 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 23:20:23.096892  156765 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 23:20:23.096904  156765 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1212 23:20:23.096919  156765 command_runner.go:130] > # Cgroup setting for conmon
	I1212 23:20:23.096932  156765 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 23:20:23.096942  156765 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 23:20:23.096953  156765 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 23:20:23.096965  156765 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 23:20:23.096979  156765 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 23:20:23.096989  156765 command_runner.go:130] > conmon_env = [
	I1212 23:20:23.097000  156765 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1212 23:20:23.097009  156765 command_runner.go:130] > ]
	I1212 23:20:23.097020  156765 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 23:20:23.097034  156765 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 23:20:23.097044  156765 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 23:20:23.097051  156765 command_runner.go:130] > # default_env = [
	I1212 23:20:23.097079  156765 command_runner.go:130] > # ]
	I1212 23:20:23.097098  156765 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 23:20:23.097105  156765 command_runner.go:130] > # selinux = false
	I1212 23:20:23.097119  156765 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 23:20:23.097132  156765 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1212 23:20:23.097141  156765 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1212 23:20:23.097148  156765 command_runner.go:130] > # seccomp_profile = ""
	I1212 23:20:23.097160  156765 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1212 23:20:23.097171  156765 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1212 23:20:23.097185  156765 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1212 23:20:23.097193  156765 command_runner.go:130] > # which might increase security.
	I1212 23:20:23.097205  156765 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1212 23:20:23.097217  156765 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 23:20:23.097230  156765 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 23:20:23.097239  156765 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 23:20:23.097250  156765 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 23:20:23.097262  156765 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:20:23.097271  156765 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 23:20:23.097286  156765 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 23:20:23.097297  156765 command_runner.go:130] > # the cgroup blockio controller.
	I1212 23:20:23.097308  156765 command_runner.go:130] > # blockio_config_file = ""
	I1212 23:20:23.097346  156765 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 23:20:23.097357  156765 command_runner.go:130] > # irqbalance daemon.
	I1212 23:20:23.097369  156765 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 23:20:23.097383  156765 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 23:20:23.097395  156765 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:20:23.097405  156765 command_runner.go:130] > # rdt_config_file = ""
	I1212 23:20:23.097417  156765 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 23:20:23.097426  156765 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 23:20:23.097440  156765 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 23:20:23.097458  156765 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 23:20:23.097472  156765 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 23:20:23.097486  156765 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 23:20:23.097496  156765 command_runner.go:130] > # will be added.
	I1212 23:20:23.097506  156765 command_runner.go:130] > # default_capabilities = [
	I1212 23:20:23.097514  156765 command_runner.go:130] > # 	"CHOWN",
	I1212 23:20:23.097526  156765 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 23:20:23.097536  156765 command_runner.go:130] > # 	"FSETID",
	I1212 23:20:23.097547  156765 command_runner.go:130] > # 	"FOWNER",
	I1212 23:20:23.097556  156765 command_runner.go:130] > # 	"SETGID",
	I1212 23:20:23.097563  156765 command_runner.go:130] > # 	"SETUID",
	I1212 23:20:23.097573  156765 command_runner.go:130] > # 	"SETPCAP",
	I1212 23:20:23.097583  156765 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 23:20:23.097592  156765 command_runner.go:130] > # 	"KILL",
	I1212 23:20:23.097600  156765 command_runner.go:130] > # ]
	I1212 23:20:23.097607  156765 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 23:20:23.097619  156765 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 23:20:23.097630  156765 command_runner.go:130] > # default_sysctls = [
	I1212 23:20:23.097640  156765 command_runner.go:130] > # ]
	I1212 23:20:23.097655  156765 command_runner.go:130] > # List of devices on the host that a
	I1212 23:20:23.097668  156765 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 23:20:23.097678  156765 command_runner.go:130] > # allowed_devices = [
	I1212 23:20:23.097687  156765 command_runner.go:130] > # 	"/dev/fuse",
	I1212 23:20:23.097693  156765 command_runner.go:130] > # ]
	I1212 23:20:23.097703  156765 command_runner.go:130] > # List of additional devices. specified as
	I1212 23:20:23.097719  156765 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 23:20:23.097738  156765 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 23:20:23.097778  156765 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 23:20:23.097787  156765 command_runner.go:130] > # additional_devices = [
	I1212 23:20:23.097797  156765 command_runner.go:130] > # ]
	I1212 23:20:23.097809  156765 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 23:20:23.097819  156765 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 23:20:23.097909  156765 command_runner.go:130] > # 	"/etc/cdi",
	I1212 23:20:23.097929  156765 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 23:20:23.097938  156765 command_runner.go:130] > # ]
	I1212 23:20:23.097952  156765 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 23:20:23.097974  156765 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 23:20:23.097983  156765 command_runner.go:130] > # Defaults to false.
	I1212 23:20:23.097994  156765 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 23:20:23.098009  156765 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 23:20:23.098022  156765 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 23:20:23.098032  156765 command_runner.go:130] > # hooks_dir = [
	I1212 23:20:23.098047  156765 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 23:20:23.098056  156765 command_runner.go:130] > # ]
	I1212 23:20:23.098066  156765 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 23:20:23.098077  156765 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 23:20:23.098090  156765 command_runner.go:130] > # its default mounts from the following two files:
	I1212 23:20:23.098100  156765 command_runner.go:130] > #
	I1212 23:20:23.098113  156765 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 23:20:23.098127  156765 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 23:20:23.098139  156765 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 23:20:23.098147  156765 command_runner.go:130] > #
	I1212 23:20:23.098156  156765 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 23:20:23.098169  156765 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 23:20:23.098184  156765 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 23:20:23.098196  156765 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 23:20:23.098205  156765 command_runner.go:130] > #
	I1212 23:20:23.098215  156765 command_runner.go:130] > # default_mounts_file = ""
	I1212 23:20:23.098227  156765 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 23:20:23.098241  156765 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 23:20:23.098254  156765 command_runner.go:130] > pids_limit = 1024
	I1212 23:20:23.098268  156765 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1212 23:20:23.098281  156765 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 23:20:23.098295  156765 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 23:20:23.098311  156765 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 23:20:23.098320  156765 command_runner.go:130] > # log_size_max = -1
	I1212 23:20:23.098330  156765 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 23:20:23.098337  156765 command_runner.go:130] > # log_to_journald = false
	I1212 23:20:23.098351  156765 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 23:20:23.098400  156765 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 23:20:23.098411  156765 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 23:20:23.098417  156765 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 23:20:23.098429  156765 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 23:20:23.098442  156765 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 23:20:23.098455  156765 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 23:20:23.098465  156765 command_runner.go:130] > # read_only = false
	I1212 23:20:23.098481  156765 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 23:20:23.098494  156765 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 23:20:23.098505  156765 command_runner.go:130] > # live configuration reload.
	I1212 23:20:23.098515  156765 command_runner.go:130] > # log_level = "info"
	I1212 23:20:23.098529  156765 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 23:20:23.098541  156765 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:20:23.098550  156765 command_runner.go:130] > # log_filter = ""
	I1212 23:20:23.098563  156765 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 23:20:23.098576  156765 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 23:20:23.098585  156765 command_runner.go:130] > # separated by comma.
	I1212 23:20:23.098589  156765 command_runner.go:130] > # uid_mappings = ""
	I1212 23:20:23.098601  156765 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 23:20:23.098616  156765 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 23:20:23.098626  156765 command_runner.go:130] > # separated by comma.
	I1212 23:20:23.098635  156765 command_runner.go:130] > # gid_mappings = ""
	I1212 23:20:23.098648  156765 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 23:20:23.098661  156765 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 23:20:23.098672  156765 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 23:20:23.098679  156765 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 23:20:23.098689  156765 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 23:20:23.098706  156765 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 23:20:23.098719  156765 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 23:20:23.098730  156765 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 23:20:23.098743  156765 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 23:20:23.098755  156765 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 23:20:23.098764  156765 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 23:20:23.098773  156765 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 23:20:23.098787  156765 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 23:20:23.098800  156765 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 23:20:23.098811  156765 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 23:20:23.098822  156765 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 23:20:23.098833  156765 command_runner.go:130] > drop_infra_ctr = false
	I1212 23:20:23.098845  156765 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 23:20:23.098853  156765 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 23:20:23.098868  156765 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 23:20:23.098879  156765 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 23:20:23.098911  156765 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 23:20:23.098923  156765 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 23:20:23.098935  156765 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 23:20:23.098949  156765 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 23:20:23.098960  156765 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1212 23:20:23.098974  156765 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 23:20:23.098987  156765 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1212 23:20:23.099001  156765 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1212 23:20:23.099011  156765 command_runner.go:130] > # default_runtime = "runc"
	I1212 23:20:23.099018  156765 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 23:20:23.099030  156765 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1212 23:20:23.099049  156765 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 23:20:23.099061  156765 command_runner.go:130] > # creation as a file is not desired either.
	I1212 23:20:23.099077  156765 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 23:20:23.099089  156765 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 23:20:23.099113  156765 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 23:20:23.099122  156765 command_runner.go:130] > # ]
	I1212 23:20:23.099136  156765 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 23:20:23.099150  156765 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 23:20:23.099164  156765 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1212 23:20:23.099180  156765 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1212 23:20:23.099188  156765 command_runner.go:130] > #
	I1212 23:20:23.099194  156765 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1212 23:20:23.099204  156765 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1212 23:20:23.099214  156765 command_runner.go:130] > #  runtime_type = "oci"
	I1212 23:20:23.099226  156765 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1212 23:20:23.099238  156765 command_runner.go:130] > #  privileged_without_host_devices = false
	I1212 23:20:23.099248  156765 command_runner.go:130] > #  allowed_annotations = []
	I1212 23:20:23.099257  156765 command_runner.go:130] > # Where:
	I1212 23:20:23.099268  156765 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1212 23:20:23.099279  156765 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1212 23:20:23.099310  156765 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 23:20:23.099324  156765 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 23:20:23.099334  156765 command_runner.go:130] > #   in $PATH.
	I1212 23:20:23.099347  156765 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1212 23:20:23.099359  156765 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 23:20:23.099368  156765 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1212 23:20:23.099374  156765 command_runner.go:130] > #   state.
	I1212 23:20:23.099392  156765 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 23:20:23.099406  156765 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1212 23:20:23.099420  156765 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 23:20:23.099432  156765 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 23:20:23.099446  156765 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 23:20:23.099458  156765 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 23:20:23.099463  156765 command_runner.go:130] > #   The currently recognized values are:
	I1212 23:20:23.099477  156765 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 23:20:23.099499  156765 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 23:20:23.099512  156765 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 23:20:23.099525  156765 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 23:20:23.099539  156765 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 23:20:23.099549  156765 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 23:20:23.099562  156765 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 23:20:23.099580  156765 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1212 23:20:23.099593  156765 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 23:20:23.099603  156765 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 23:20:23.099613  156765 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1212 23:20:23.099625  156765 command_runner.go:130] > runtime_type = "oci"
	I1212 23:20:23.099633  156765 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 23:20:23.099638  156765 command_runner.go:130] > runtime_config_path = ""
	I1212 23:20:23.099648  156765 command_runner.go:130] > monitor_path = ""
	I1212 23:20:23.099655  156765 command_runner.go:130] > monitor_cgroup = ""
	I1212 23:20:23.099666  156765 command_runner.go:130] > monitor_exec_cgroup = ""
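For reference, the active runtime handler above can be sanity-checked on the node against the path it declares (a minimal sketch, not a command from this run):

# Confirm the configured runtime binary exists and report its version.
test -x /usr/bin/runc && /usr/bin/runc --version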
	I1212 23:20:23.099679  156765 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1212 23:20:23.099689  156765 command_runner.go:130] > # running containers
	I1212 23:20:23.099699  156765 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1212 23:20:23.099711  156765 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1212 23:20:23.099770  156765 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1212 23:20:23.099788  156765 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1212 23:20:23.099797  156765 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1212 23:20:23.099805  156765 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1212 23:20:23.099817  156765 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1212 23:20:23.099827  156765 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1212 23:20:23.099838  156765 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1212 23:20:23.099849  156765 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1212 23:20:23.099865  156765 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 23:20:23.099879  156765 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 23:20:23.099897  156765 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 23:20:23.099913  156765 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1212 23:20:23.099928  156765 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1212 23:20:23.099941  156765 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 23:20:23.099955  156765 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 23:20:23.099971  156765 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 23:20:23.099987  156765 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 23:20:23.099999  156765 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 23:20:23.100009  156765 command_runner.go:130] > # Example:
	I1212 23:20:23.100018  156765 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 23:20:23.100029  156765 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 23:20:23.100040  156765 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 23:20:23.100052  156765 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 23:20:23.100061  156765 command_runner.go:130] > # cpuset = 0
	I1212 23:20:23.100065  156765 command_runner.go:130] > # cpushares = "0-1"
	I1212 23:20:23.100074  156765 command_runner.go:130] > # Where:
	I1212 23:20:23.100088  156765 command_runner.go:130] > # The workload name is workload-type.
	I1212 23:20:23.100103  156765 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 23:20:23.100116  156765 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 23:20:23.100128  156765 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 23:20:23.100144  156765 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 23:20:23.100157  156765 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 23:20:23.100166  156765 command_runner.go:130] > # 
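A hedged sketch of a pod that would opt into the commented "workload-type" example above, following the annotation form shown there (hypothetical pod name; only meaningful if that workload block were actually uncommented in crio.conf):

# Hypothetical pod opting into the example workload via annotations.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: workload-demo                                      # hypothetical name
  annotations:
    io.crio/workload: ""                                   # activation annotation (key only)
    io.crio.workload-type/workload-demo: '{"cpushares": "512"}'   # per-container override, as in the example above
spec:
  containers:
  - name: workload-demo
    image: registry.k8s.io/pause:3.9                       # pause image already referenced in this config
EOF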
	I1212 23:20:23.100176  156765 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 23:20:23.100188  156765 command_runner.go:130] > #
	I1212 23:20:23.100201  156765 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 23:20:23.100215  156765 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1212 23:20:23.100228  156765 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1212 23:20:23.100242  156765 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1212 23:20:23.100254  156765 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1212 23:20:23.100263  156765 command_runner.go:130] > [crio.image]
	I1212 23:20:23.100273  156765 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 23:20:23.100283  156765 command_runner.go:130] > # default_transport = "docker://"
	I1212 23:20:23.100297  156765 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 23:20:23.100314  156765 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 23:20:23.100324  156765 command_runner.go:130] > # global_auth_file = ""
	I1212 23:20:23.100336  156765 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 23:20:23.100347  156765 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:20:23.100357  156765 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1212 23:20:23.100367  156765 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 23:20:23.100379  156765 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 23:20:23.100389  156765 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:20:23.100396  156765 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 23:20:23.100410  156765 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 23:20:23.100419  156765 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1212 23:20:23.100442  156765 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1212 23:20:23.100453  156765 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 23:20:23.100461  156765 command_runner.go:130] > # pause_command = "/pause"
	I1212 23:20:23.100474  156765 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 23:20:23.100487  156765 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 23:20:23.100501  156765 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 23:20:23.100511  156765 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 23:20:23.100525  156765 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 23:20:23.100536  156765 command_runner.go:130] > # signature_policy = ""
	I1212 23:20:23.100549  156765 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 23:20:23.100563  156765 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 23:20:23.100573  156765 command_runner.go:130] > # changing them here.
	I1212 23:20:23.100583  156765 command_runner.go:130] > # insecure_registries = [
	I1212 23:20:23.100592  156765 command_runner.go:130] > # ]
	I1212 23:20:23.100602  156765 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 23:20:23.100627  156765 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 23:20:23.100638  156765 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 23:20:23.100649  156765 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 23:20:23.100660  156765 command_runner.go:130] > # big_files_temporary_dir = ""
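The pause_image above is the one value set explicitly in this [crio.image] section; a quick check that the runtime can actually pull it (sketch, assuming crictl is configured as it is elsewhere in this log):

sudo crictl pull registry.k8s.io/pause:3.9
sudo crictl images | grep pause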
	I1212 23:20:23.100673  156765 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 23:20:23.100683  156765 command_runner.go:130] > # CNI plugins.
	I1212 23:20:23.100693  156765 command_runner.go:130] > [crio.network]
	I1212 23:20:23.100706  156765 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 23:20:23.100719  156765 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1212 23:20:23.100729  156765 command_runner.go:130] > # cni_default_network = ""
	I1212 23:20:23.100746  156765 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 23:20:23.100756  156765 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 23:20:23.100769  156765 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 23:20:23.100777  156765 command_runner.go:130] > # plugin_dirs = [
	I1212 23:20:23.100784  156765 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 23:20:23.100789  156765 command_runner.go:130] > # ]
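With cni_default_network left empty, CRI-O picks the first configuration found in network_dir; both default locations listed above can be inspected directly (sketch):

ls -l /etc/cni/net.d/
ls /opt/cni/bin/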
	I1212 23:20:23.100802  156765 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 23:20:23.100812  156765 command_runner.go:130] > [crio.metrics]
	I1212 23:20:23.100821  156765 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 23:20:23.100832  156765 command_runner.go:130] > enable_metrics = true
	I1212 23:20:23.100843  156765 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 23:20:23.100854  156765 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 23:20:23.100867  156765 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1212 23:20:23.100879  156765 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 23:20:23.100893  156765 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 23:20:23.100904  156765 command_runner.go:130] > # metrics_collectors = [
	I1212 23:20:23.100914  156765 command_runner.go:130] > # 	"operations",
	I1212 23:20:23.100923  156765 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1212 23:20:23.100938  156765 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1212 23:20:23.100949  156765 command_runner.go:130] > # 	"operations_errors",
	I1212 23:20:23.100959  156765 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1212 23:20:23.100969  156765 command_runner.go:130] > # 	"image_pulls_by_name",
	I1212 23:20:23.100979  156765 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1212 23:20:23.100987  156765 command_runner.go:130] > # 	"image_pulls_failures",
	I1212 23:20:23.100992  156765 command_runner.go:130] > # 	"image_pulls_successes",
	I1212 23:20:23.101002  156765 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 23:20:23.101013  156765 command_runner.go:130] > # 	"image_layer_reuse",
	I1212 23:20:23.101020  156765 command_runner.go:130] > # 	"containers_oom_total",
	I1212 23:20:23.101031  156765 command_runner.go:130] > # 	"containers_oom",
	I1212 23:20:23.101041  156765 command_runner.go:130] > # 	"processes_defunct",
	I1212 23:20:23.101051  156765 command_runner.go:130] > # 	"operations_total",
	I1212 23:20:23.101061  156765 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 23:20:23.101072  156765 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 23:20:23.101083  156765 command_runner.go:130] > # 	"operations_errors_total",
	I1212 23:20:23.101095  156765 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 23:20:23.101105  156765 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 23:20:23.101119  156765 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 23:20:23.101130  156765 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 23:20:23.101141  156765 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 23:20:23.101151  156765 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 23:20:23.101160  156765 command_runner.go:130] > # ]
	I1212 23:20:23.101171  156765 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 23:20:23.101180  156765 command_runner.go:130] > # metrics_port = 9090
	I1212 23:20:23.101188  156765 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 23:20:23.101199  156765 command_runner.go:130] > # metrics_socket = ""
	I1212 23:20:23.101211  156765 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 23:20:23.101228  156765 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 23:20:23.101241  156765 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 23:20:23.101252  156765 command_runner.go:130] > # certificate on any modification event.
	I1212 23:20:23.101262  156765 command_runner.go:130] > # metrics_cert = ""
	I1212 23:20:23.101274  156765 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 23:20:23.101282  156765 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 23:20:23.101291  156765 command_runner.go:130] > # metrics_key = ""
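Since enable_metrics = true and metrics_port stays at its commented default of 9090, the collectors listed above should be scrapeable locally (sketch, assuming the default port applies):

curl -s http://127.0.0.1:9090/metrics | grep -i crio | head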
	I1212 23:20:23.101304  156765 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 23:20:23.101317  156765 command_runner.go:130] > [crio.tracing]
	I1212 23:20:23.101330  156765 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 23:20:23.101340  156765 command_runner.go:130] > # enable_tracing = false
	I1212 23:20:23.101352  156765 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1212 23:20:23.101361  156765 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1212 23:20:23.101369  156765 command_runner.go:130] > # Number of samples to collect per million spans.
	I1212 23:20:23.101380  156765 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 23:20:23.101394  156765 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 23:20:23.101403  156765 command_runner.go:130] > [crio.stats]
	I1212 23:20:23.101413  156765 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 23:20:23.101425  156765 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 23:20:23.101435  156765 command_runner.go:130] > # stats_collection_period = 0
	I1212 23:20:23.101488  156765 command_runner.go:130] ! time="2023-12-12 23:20:23.086210433Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1212 23:20:23.101518  156765 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 23:20:23.101628  156765 cni.go:84] Creating CNI manager for ""
	I1212 23:20:23.101644  156765 cni.go:136] 1 nodes found, recommending kindnet
	I1212 23:20:23.101665  156765 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:20:23.101693  156765 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-510563 NodeName:multinode-510563 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:20:23.101868  156765 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-510563"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
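The rendered kubeadm config above can be checked without mutating the node by dry-running it with the staged binaries (sketch, assuming the file has been copied to /var/tmp/minikube/kubeadm.yaml as it is later in this log):

sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run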
	
	I1212 23:20:23.101976  156765 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-510563 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-510563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 23:20:23.102042  156765 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:20:23.112142  156765 command_runner.go:130] > kubeadm
	I1212 23:20:23.112163  156765 command_runner.go:130] > kubectl
	I1212 23:20:23.112168  156765 command_runner.go:130] > kubelet
	I1212 23:20:23.112228  156765 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:20:23.112305  156765 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:20:23.121696  156765 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1212 23:20:23.138715  156765 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:20:23.155001  156765 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
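Once the 10-kubeadm.conf drop-in and kubelet.service unit above are in place, systemd has to reload them before the kubelet restart that follows; the manual equivalent would be (sketch, not commands taken from this run):

sudo systemctl daemon-reload
sudo systemctl restart kubelet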
	I1212 23:20:23.171581  156765 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I1212 23:20:23.175470  156765 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:20:23.187943  156765 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563 for IP: 192.168.39.38
	I1212 23:20:23.187988  156765 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:20:23.188153  156765 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1212 23:20:23.188187  156765 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1212 23:20:23.188228  156765 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.key
	I1212 23:20:23.188243  156765 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.crt with IP's: []
	I1212 23:20:23.335573  156765 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.crt ...
	I1212 23:20:23.335606  156765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.crt: {Name:mk731a446ec2af87a8ab57725257a46d38db0ed9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:20:23.335771  156765 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.key ...
	I1212 23:20:23.335782  156765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.key: {Name:mk4db4c013fcacab8fcdfd6beb841886cb6845e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:20:23.335866  156765 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/apiserver.key.383c1efe
	I1212 23:20:23.335879  156765 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/apiserver.crt.383c1efe with IP's: [192.168.39.38 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 23:20:23.401603  156765 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/apiserver.crt.383c1efe ...
	I1212 23:20:23.401632  156765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/apiserver.crt.383c1efe: {Name:mk2a11c382a28e4375c58d4a6e78b93dbe4e0cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:20:23.401770  156765 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/apiserver.key.383c1efe ...
	I1212 23:20:23.401793  156765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/apiserver.key.383c1efe: {Name:mkfd29accc2c16561658fac9beb2ffbc154fce07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:20:23.401861  156765 certs.go:337] copying /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/apiserver.crt.383c1efe -> /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/apiserver.crt
	I1212 23:20:23.401957  156765 certs.go:341] copying /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/apiserver.key.383c1efe -> /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/apiserver.key
	I1212 23:20:23.402015  156765 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/proxy-client.key
	I1212 23:20:23.402029  156765 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/proxy-client.crt with IP's: []
	I1212 23:20:23.463275  156765 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/proxy-client.crt ...
	I1212 23:20:23.463306  156765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/proxy-client.crt: {Name:mk9a74e582e0441b12ffefa00c32477ab9d8be02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:20:23.463454  156765 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/proxy-client.key ...
	I1212 23:20:23.463466  156765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/proxy-client.key: {Name:mkb23525b1ff4cb88e2f4dd297e99b06e816d163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:20:23.463530  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 23:20:23.463553  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 23:20:23.463564  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 23:20:23.463574  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 23:20:23.463588  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 23:20:23.463601  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 23:20:23.463614  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 23:20:23.463623  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 23:20:23.463669  156765 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1212 23:20:23.463704  156765 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1212 23:20:23.463715  156765 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:20:23.463737  156765 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:20:23.463759  156765 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:20:23.463782  156765 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1212 23:20:23.463818  156765 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1212 23:20:23.463841  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:20:23.463854  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem -> /usr/share/ca-certificates/143541.pem
	I1212 23:20:23.463866  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> /usr/share/ca-certificates/1435412.pem
	I1212 23:20:23.464477  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:20:23.490355  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:20:23.514890  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:20:23.537670  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 23:20:23.560262  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:20:23.583992  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 23:20:23.607035  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:20:23.629944  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 23:20:23.652185  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:20:23.674501  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1212 23:20:23.696517  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1212 23:20:23.718491  156765 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
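With the certificates and kubeconfig copied into place, the SANs generated for the apiserver certificate (192.168.39.38, 10.96.0.1, 127.0.0.1, 10.0.0.1 per the log above) can be confirmed on the node (sketch):

sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
  | grep -A1 'Subject Alternative Name'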
	I1212 23:20:23.734246  156765 ssh_runner.go:195] Run: openssl version
	I1212 23:20:23.739765  156765 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 23:20:23.739834  156765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:20:23.748998  156765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:20:23.753356  156765 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:20:23.753387  156765 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:20:23.753427  156765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:20:23.758789  156765 command_runner.go:130] > b5213941
	I1212 23:20:23.758882  156765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:20:23.768201  156765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1212 23:20:23.777625  156765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1212 23:20:23.781961  156765 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1212 23:20:23.782147  156765 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1212 23:20:23.782193  156765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1212 23:20:23.787687  156765 command_runner.go:130] > 51391683
	I1212 23:20:23.787753  156765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1212 23:20:23.797553  156765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1212 23:20:23.807082  156765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1212 23:20:23.811489  156765 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1212 23:20:23.811516  156765 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1212 23:20:23.811547  156765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1212 23:20:23.817011  156765 command_runner.go:130] > 3ec20f2e
	I1212 23:20:23.817061  156765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
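The three blocks above install each CA using the OpenSSL hash-symlink convention: the certificate lands in /usr/share/ca-certificates and a <subject-hash>.0 symlink is created under /etc/ssl/certs. The same steps written out once for the minikube CA (sketch):

h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem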
	I1212 23:20:23.826223  156765 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:20:23.830232  156765 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:20:23.830345  156765 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:20:23.830417  156765 kubeadm.go:404] StartCluster: {Name:multinode-510563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-510563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:20:23.830510  156765 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:20:23.830569  156765 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:20:23.866974  156765 cri.go:89] found id: ""
	I1212 23:20:23.867064  156765 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:20:23.876466  156765 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1212 23:20:23.876504  156765 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1212 23:20:23.876511  156765 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1212 23:20:23.876605  156765 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:20:23.885925  156765 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:20:23.894733  156765 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 23:20:23.894754  156765 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 23:20:23.894761  156765 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 23:20:23.894768  156765 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:20:23.894804  156765 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:20:23.894835  156765 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 23:20:24.014897  156765 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 23:20:24.014941  156765 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1212 23:20:24.015033  156765 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:20:24.015060  156765 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 23:20:24.251977  156765 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:20:24.252022  156765 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:20:24.252189  156765 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:20:24.252203  156765 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:20:24.252320  156765 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:20:24.252337  156765 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:20:24.471333  156765 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:20:24.471403  156765 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:20:24.646485  156765 out.go:204]   - Generating certificates and keys ...
	I1212 23:20:24.646626  156765 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:20:24.646653  156765 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 23:20:24.783871  156765 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:20:24.783920  156765 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 23:20:24.784051  156765 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:20:24.784067  156765 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:20:24.784210  156765 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:20:24.784220  156765 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:20:24.960573  156765 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 23:20:24.960621  156765 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1212 23:20:25.146960  156765 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 23:20:25.146987  156765 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1212 23:20:25.306513  156765 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 23:20:25.306546  156765 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1212 23:20:25.306707  156765 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-510563] and IPs [192.168.39.38 127.0.0.1 ::1]
	I1212 23:20:25.306720  156765 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-510563] and IPs [192.168.39.38 127.0.0.1 ::1]
	I1212 23:20:25.486518  156765 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 23:20:25.486548  156765 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1212 23:20:25.486705  156765 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-510563] and IPs [192.168.39.38 127.0.0.1 ::1]
	I1212 23:20:25.486719  156765 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-510563] and IPs [192.168.39.38 127.0.0.1 ::1]
	I1212 23:20:25.814512  156765 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:20:25.814556  156765 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:20:26.173447  156765 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:20:26.173479  156765 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:20:26.445299  156765 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 23:20:26.445334  156765 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1212 23:20:26.445449  156765 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:20:26.445482  156765 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:20:26.612765  156765 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:20:26.612799  156765 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:20:26.821655  156765 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:20:26.821686  156765 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:20:26.978620  156765 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:20:26.978655  156765 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:20:27.246661  156765 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:20:27.246701  156765 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:20:27.247148  156765 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:20:27.247167  156765 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:20:27.251899  156765 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:20:27.253852  156765 out.go:204]   - Booting up control plane ...
	I1212 23:20:27.251929  156765 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:20:27.254028  156765 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:20:27.254065  156765 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:20:27.254158  156765 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:20:27.254179  156765 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:20:27.254844  156765 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:20:27.254860  156765 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:20:27.269281  156765 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:20:27.269310  156765 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:20:27.270176  156765 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:20:27.270191  156765 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:20:27.270390  156765 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:20:27.270402  156765 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 23:20:27.398699  156765 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:20:27.398745  156765 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:20:34.900704  156765 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503020 seconds
	I1212 23:20:34.900744  156765 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.503020 seconds
	I1212 23:20:34.900866  156765 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:20:34.900879  156765 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:20:34.918657  156765 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:20:34.918666  156765 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:20:35.454798  156765 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:20:35.454836  156765 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:20:35.455151  156765 command_runner.go:130] > [mark-control-plane] Marking the node multinode-510563 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:20:35.455158  156765 kubeadm.go:322] [mark-control-plane] Marking the node multinode-510563 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:20:35.970259  156765 kubeadm.go:322] [bootstrap-token] Using token: 063pv9.o37urpf4zwmjnhn0
	I1212 23:20:35.970295  156765 command_runner.go:130] > [bootstrap-token] Using token: 063pv9.o37urpf4zwmjnhn0
	I1212 23:20:35.971757  156765 out.go:204]   - Configuring RBAC rules ...
	I1212 23:20:35.971865  156765 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:20:35.971881  156765 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:20:35.981331  156765 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:20:35.981353  156765 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:20:35.990623  156765 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:20:35.990642  156765 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:20:35.996269  156765 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:20:35.996299  156765 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:20:36.000014  156765 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:20:36.000036  156765 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:20:36.009222  156765 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:20:36.009240  156765 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:20:36.027334  156765 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:20:36.027380  156765 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:20:36.264646  156765 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:20:36.264673  156765 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 23:20:36.411878  156765 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:20:36.411920  156765 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 23:20:36.411949  156765 kubeadm.go:322] 
	I1212 23:20:36.412044  156765 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:20:36.412093  156765 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1212 23:20:36.412112  156765 kubeadm.go:322] 
	I1212 23:20:36.412224  156765 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:20:36.412237  156765 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1212 23:20:36.412243  156765 kubeadm.go:322] 
	I1212 23:20:36.412281  156765 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:20:36.412293  156765 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1212 23:20:36.412373  156765 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:20:36.412384  156765 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:20:36.412470  156765 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:20:36.412480  156765 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:20:36.412490  156765 kubeadm.go:322] 
	I1212 23:20:36.412573  156765 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:20:36.412586  156765 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1212 23:20:36.412593  156765 kubeadm.go:322] 
	I1212 23:20:36.412666  156765 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:20:36.412675  156765 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:20:36.412680  156765 kubeadm.go:322] 
	I1212 23:20:36.412741  156765 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:20:36.412751  156765 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1212 23:20:36.412842  156765 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:20:36.412856  156765 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:20:36.412946  156765 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:20:36.412954  156765 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:20:36.412961  156765 kubeadm.go:322] 
	I1212 23:20:36.413058  156765 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:20:36.413069  156765 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:20:36.413154  156765 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:20:36.413164  156765 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1212 23:20:36.413170  156765 kubeadm.go:322] 
	I1212 23:20:36.413282  156765 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 063pv9.o37urpf4zwmjnhn0 \
	I1212 23:20:36.413295  156765 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 063pv9.o37urpf4zwmjnhn0 \
	I1212 23:20:36.413432  156765 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d \
	I1212 23:20:36.413443  156765 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d \
	I1212 23:20:36.413474  156765 kubeadm.go:322] 	--control-plane 
	I1212 23:20:36.413484  156765 command_runner.go:130] > 	--control-plane 
	I1212 23:20:36.413490  156765 kubeadm.go:322] 
	I1212 23:20:36.413607  156765 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:20:36.413626  156765 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:20:36.413646  156765 kubeadm.go:322] 
	I1212 23:20:36.413744  156765 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 063pv9.o37urpf4zwmjnhn0 \
	I1212 23:20:36.413757  156765 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 063pv9.o37urpf4zwmjnhn0 \
	I1212 23:20:36.413901  156765 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1212 23:20:36.413917  156765 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1212 23:20:36.414110  156765 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:20:36.414120  156765 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
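
The warning above only means the kubelet systemd unit is not enabled for boot; the remedy it names is a single command (shown here as an illustrative aside, not part of the captured run):

  # enable the kubelet unit so it starts automatically on boot, as the kubeadm warning suggests
  sudo systemctl enable kubelet.service
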
	I1212 23:20:36.414140  156765 cni.go:84] Creating CNI manager for ""
	I1212 23:20:36.414153  156765 cni.go:136] 1 nodes found, recommending kindnet
	I1212 23:20:36.416027  156765 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 23:20:36.417414  156765 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 23:20:36.448865  156765 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 23:20:36.448903  156765 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1212 23:20:36.448920  156765 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 23:20:36.448930  156765 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 23:20:36.448939  156765 command_runner.go:130] > Access: 2023-12-12 23:20:03.268996615 +0000
	I1212 23:20:36.448951  156765 command_runner.go:130] > Modify: 2023-12-08 06:25:18.000000000 +0000
	I1212 23:20:36.448958  156765 command_runner.go:130] > Change: 2023-12-12 23:20:01.386996615 +0000
	I1212 23:20:36.448967  156765 command_runner.go:130] >  Birth: -
	I1212 23:20:36.449028  156765 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 23:20:36.449046  156765 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 23:20:36.500082  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 23:20:37.479179  156765 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1212 23:20:37.490063  156765 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1212 23:20:37.501875  156765 command_runner.go:130] > serviceaccount/kindnet created
	I1212 23:20:37.523632  156765 command_runner.go:130] > daemonset.apps/kindnet created
	I1212 23:20:37.526547  156765 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.026421705s)
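
For reference, the CNI step above amounts to applying the generated kindnet manifest with the bundled kubectl and then confirming the DaemonSet exists. A minimal sketch of the same check, reusing the paths from this run (the DaemonSet namespace is not shown in the log, so all namespaces are searched):

  # apply the CNI manifest minikube wrote to the node (same file and kubectl binary as in the log)
  sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig apply -f /var/tmp/minikube/cni.yaml
  # confirm the kindnet DaemonSet was created
  kubectl get daemonsets --all-namespaces | grep kindnet
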
	I1212 23:20:37.526604  156765 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:20:37.526711  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:37.526721  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=multinode-510563 minikube.k8s.io/updated_at=2023_12_12T23_20_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:37.592668  156765 command_runner.go:130] > -16
	I1212 23:20:37.592717  156765 ops.go:34] apiserver oom_adj: -16
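
The -16 read back here is the kube-apiserver's legacy OOM adjustment (valid range -17 to +15; lower values make the process less likely to be OOM-killed). The check is just a /proc read, as the Run line above shows:

  # read the apiserver's oom_adj, as minikube does above
  cat /proc/$(pgrep kube-apiserver)/oom_adj
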
	I1212 23:20:37.680092  156765 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1212 23:20:37.685351  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:37.753695  156765 command_runner.go:130] > node/multinode-510563 labeled
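
The labeling step can be verified afterwards with a plain kubectl query; a small sketch using the node name from this run:

  # list the minikube.k8s.io/* labels that were just applied
  kubectl get node multinode-510563 --show-labels
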
	I1212 23:20:37.812473  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:37.812583  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:37.906320  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:38.408450  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:38.494700  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:38.907833  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:38.999030  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:39.408687  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:39.500248  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:39.908756  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:39.992264  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:40.408620  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:40.493851  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:40.907812  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:40.993767  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:41.408770  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:41.500819  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:41.907865  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:41.991371  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:42.408474  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:42.501387  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:42.907816  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:43.000992  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:43.408563  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:43.525171  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:43.908530  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:43.988922  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:44.408177  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:44.503448  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:44.908678  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:44.995736  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:45.408511  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:45.497352  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:45.907862  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:45.990933  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:46.408386  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:46.496844  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:46.908476  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:47.000412  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:47.408096  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:47.565025  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:47.908001  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:48.062710  156765 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 23:20:48.408287  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:20:48.509111  156765 command_runner.go:130] > NAME      SECRETS   AGE
	I1212 23:20:48.510481  156765 command_runner.go:130] > default   0         0s
	I1212 23:20:48.512526  156765 kubeadm.go:1088] duration metric: took 10.985894291s to wait for elevateKubeSystemPrivileges.
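
The ~11 s spent here is a poll for the "default" ServiceAccount, which the controller manager creates asynchronously once the API server is up. A hedged sketch of an equivalent wait loop (what the repeated "get sa default" calls above are doing):

  # poll until the default ServiceAccount exists
  until kubectl get serviceaccount default >/dev/null 2>&1; do sleep 0.5; done
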
	I1212 23:20:48.512560  156765 kubeadm.go:406] StartCluster complete in 24.682162719s
	I1212 23:20:48.512596  156765 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:20:48.512725  156765 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:20:48.513396  156765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:20:48.513626  156765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:20:48.513760  156765 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:20:48.513843  156765 addons.go:69] Setting storage-provisioner=true in profile "multinode-510563"
	I1212 23:20:48.513851  156765 config.go:182] Loaded profile config "multinode-510563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:20:48.513861  156765 addons.go:231] Setting addon storage-provisioner=true in "multinode-510563"
	I1212 23:20:48.513876  156765 addons.go:69] Setting default-storageclass=true in profile "multinode-510563"
	I1212 23:20:48.513903  156765 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-510563"
	I1212 23:20:48.513916  156765 host.go:66] Checking if "multinode-510563" exists ...
	I1212 23:20:48.514078  156765 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:20:48.514302  156765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:20:48.514340  156765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:20:48.514351  156765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:20:48.514380  156765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:20:48.514508  156765 kapi.go:59] client config for multinode-510563: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.crt", KeyFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.key", CAFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:20:48.515266  156765 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 23:20:48.515526  156765 round_trippers.go:463] GET https://192.168.39.38:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:20:48.515541  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:48.515552  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:48.515562  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:48.528342  156765 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1212 23:20:48.528368  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:48.528378  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:48.528386  156765 round_trippers.go:580]     Content-Length: 291
	I1212 23:20:48.528392  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:48 GMT
	I1212 23:20:48.528403  156765 round_trippers.go:580]     Audit-Id: e5102f86-f05c-4a80-81d0-f8181cb5e0a4
	I1212 23:20:48.528411  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:48.528416  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:48.528422  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:48.528463  156765 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b98a4ef4-74a2-4d35-a29b-c065c5f3121c","resourceVersion":"346","creationTimestamp":"2023-12-12T23:20:36Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 23:20:48.528983  156765 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b98a4ef4-74a2-4d35-a29b-c065c5f3121c","resourceVersion":"346","creationTimestamp":"2023-12-12T23:20:36Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 23:20:48.529044  156765 round_trippers.go:463] PUT https://192.168.39.38:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:20:48.529056  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:48.529067  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:48.529078  156765 round_trippers.go:473]     Content-Type: application/json
	I1212 23:20:48.529090  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:48.531496  156765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42119
	I1212 23:20:48.531685  156765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41179
	I1212 23:20:48.531904  156765 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:20:48.532108  156765 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:20:48.532453  156765 main.go:141] libmachine: Using API Version  1
	I1212 23:20:48.532478  156765 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:20:48.532606  156765 main.go:141] libmachine: Using API Version  1
	I1212 23:20:48.532624  156765 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:20:48.532810  156765 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:20:48.532969  156765 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:20:48.533126  156765 main.go:141] libmachine: (multinode-510563) Calling .GetState
	I1212 23:20:48.533350  156765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:20:48.533387  156765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:20:48.535081  156765 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:20:48.535326  156765 kapi.go:59] client config for multinode-510563: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.crt", KeyFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.key", CAFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:20:48.535544  156765 addons.go:231] Setting addon default-storageclass=true in "multinode-510563"
	I1212 23:20:48.535572  156765 host.go:66] Checking if "multinode-510563" exists ...
	I1212 23:20:48.535884  156765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:20:48.535923  156765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:20:48.544354  156765 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1212 23:20:48.544389  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:48.544405  156765 round_trippers.go:580]     Audit-Id: 7530c8a2-4e1a-4aa3-b3ea-ef8c0112c2e3
	I1212 23:20:48.544417  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:48.544442  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:48.544450  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:48.544459  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:48.544468  156765 round_trippers.go:580]     Content-Length: 291
	I1212 23:20:48.544480  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:48 GMT
	I1212 23:20:48.544506  156765 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b98a4ef4-74a2-4d35-a29b-c065c5f3121c","resourceVersion":"354","creationTimestamp":"2023-12-12T23:20:36Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 23:20:48.544678  156765 round_trippers.go:463] GET https://192.168.39.38:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:20:48.544695  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:48.544705  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:48.544714  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:48.547490  156765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43097
	I1212 23:20:48.547878  156765 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:20:48.548308  156765 main.go:141] libmachine: Using API Version  1
	I1212 23:20:48.548329  156765 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:20:48.548651  156765 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:20:48.548834  156765 main.go:141] libmachine: (multinode-510563) Calling .GetState
	I1212 23:20:48.550313  156765 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:20:48.552150  156765 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:20:48.553531  156765 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:20:48.553059  156765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33643
	I1212 23:20:48.553549  156765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:20:48.553567  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:20:48.553954  156765 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:20:48.554470  156765 main.go:141] libmachine: Using API Version  1
	I1212 23:20:48.554493  156765 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:20:48.554872  156765 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:20:48.555445  156765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:20:48.555489  156765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:20:48.556471  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:48.556921  156765 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:20:48.556951  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:48.557099  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:20:48.557286  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:20:48.557449  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:20:48.557581  156765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa Username:docker}
	I1212 23:20:48.560595  156765 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1212 23:20:48.560623  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:48.560632  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:48.560640  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:48.560647  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:48.560654  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:48.560662  156765 round_trippers.go:580]     Content-Length: 291
	I1212 23:20:48.560683  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:48 GMT
	I1212 23:20:48.560690  156765 round_trippers.go:580]     Audit-Id: 8d3513a2-6f29-4940-877b-24dcaea66ad6
	I1212 23:20:48.560714  156765 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b98a4ef4-74a2-4d35-a29b-c065c5f3121c","resourceVersion":"354","creationTimestamp":"2023-12-12T23:20:36Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 23:20:48.560816  156765 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-510563" context rescaled to 1 replicas
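
The GET/PUT pair above uses the Deployment's autoscaling/v1 Scale subresource to drop CoreDNS from 2 replicas to 1. The same rescale can be expressed with kubectl, e.g.:

  # scale the coredns deployment in kube-system down to a single replica
  kubectl -n kube-system scale deployment coredns --replicas=1
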
	I1212 23:20:48.560847  156765 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:20:48.562674  156765 out.go:177] * Verifying Kubernetes components...
	I1212 23:20:48.564457  156765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:20:48.570910  156765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41745
	I1212 23:20:48.571341  156765 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:20:48.571822  156765 main.go:141] libmachine: Using API Version  1
	I1212 23:20:48.571842  156765 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:20:48.572157  156765 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:20:48.572326  156765 main.go:141] libmachine: (multinode-510563) Calling .GetState
	I1212 23:20:48.573913  156765 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:20:48.574150  156765 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:20:48.574165  156765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:20:48.574177  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:20:48.577078  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:48.577507  156765 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:20:48.577523  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:20:48.577778  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:20:48.577942  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:20:48.578116  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:20:48.578241  156765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa Username:docker}
	I1212 23:20:48.663275  156765 command_runner.go:130] > apiVersion: v1
	I1212 23:20:48.663300  156765 command_runner.go:130] > data:
	I1212 23:20:48.663307  156765 command_runner.go:130] >   Corefile: |
	I1212 23:20:48.663312  156765 command_runner.go:130] >     .:53 {
	I1212 23:20:48.663316  156765 command_runner.go:130] >         errors
	I1212 23:20:48.663320  156765 command_runner.go:130] >         health {
	I1212 23:20:48.663336  156765 command_runner.go:130] >            lameduck 5s
	I1212 23:20:48.663340  156765 command_runner.go:130] >         }
	I1212 23:20:48.663344  156765 command_runner.go:130] >         ready
	I1212 23:20:48.663352  156765 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 23:20:48.663360  156765 command_runner.go:130] >            pods insecure
	I1212 23:20:48.663367  156765 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 23:20:48.663378  156765 command_runner.go:130] >            ttl 30
	I1212 23:20:48.663385  156765 command_runner.go:130] >         }
	I1212 23:20:48.663392  156765 command_runner.go:130] >         prometheus :9153
	I1212 23:20:48.663406  156765 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 23:20:48.663414  156765 command_runner.go:130] >            max_concurrent 1000
	I1212 23:20:48.663418  156765 command_runner.go:130] >         }
	I1212 23:20:48.663422  156765 command_runner.go:130] >         cache 30
	I1212 23:20:48.663426  156765 command_runner.go:130] >         loop
	I1212 23:20:48.663430  156765 command_runner.go:130] >         reload
	I1212 23:20:48.663436  156765 command_runner.go:130] >         loadbalance
	I1212 23:20:48.663440  156765 command_runner.go:130] >     }
	I1212 23:20:48.663444  156765 command_runner.go:130] > kind: ConfigMap
	I1212 23:20:48.663454  156765 command_runner.go:130] > metadata:
	I1212 23:20:48.663461  156765 command_runner.go:130] >   creationTimestamp: "2023-12-12T23:20:36Z"
	I1212 23:20:48.663464  156765 command_runner.go:130] >   name: coredns
	I1212 23:20:48.663469  156765 command_runner.go:130] >   namespace: kube-system
	I1212 23:20:48.663476  156765 command_runner.go:130] >   resourceVersion: "267"
	I1212 23:20:48.663481  156765 command_runner.go:130] >   uid: 15ab4162-ef32-4564-b7ab-f6d6948ed723
	I1212 23:20:48.665293  156765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 23:20:48.665378  156765 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:20:48.665631  156765 kapi.go:59] client config for multinode-510563: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.crt", KeyFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.key", CAFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:20:48.665907  156765 node_ready.go:35] waiting up to 6m0s for node "multinode-510563" to be "Ready" ...
	I1212 23:20:48.665983  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:48.665990  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:48.665997  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:48.666006  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:48.668155  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:20:48.668169  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:48.668176  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:48.668183  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:48.668192  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:48 GMT
	I1212 23:20:48.668197  156765 round_trippers.go:580]     Audit-Id: 0d7f6248-289d-4adf-b05f-70dd56502b20
	I1212 23:20:48.668203  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:48.668211  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:48.668346  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"333","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:2
0:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 5988 chars]
	I1212 23:20:48.668904  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:48.668920  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:48.668927  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:48.668933  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:48.670705  156765 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:20:48.670722  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:48.670731  156765 round_trippers.go:580]     Audit-Id: abdabe85-96a4-4b6e-bcd1-62f268724689
	I1212 23:20:48.670739  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:48.670745  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:48.670753  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:48.670761  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:48.670780  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:48 GMT
	I1212 23:20:48.671100  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"333","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:2
0:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 5988 chars]
	I1212 23:20:48.720184  156765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:20:48.757298  156765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:20:49.171935  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:49.171959  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:49.171974  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:49.171985  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:49.259769  156765 round_trippers.go:574] Response Status: 200 OK in 87 milliseconds
	I1212 23:20:49.259790  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:49.259797  156765 round_trippers.go:580]     Audit-Id: ad6fc8f8-9dac-48e5-a0eb-f26ed7a1b28e
	I1212 23:20:49.259803  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:49.259808  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:49.259814  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:49.259828  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:49.259836  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:49 GMT
	I1212 23:20:49.284361  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"370","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 23:20:49.455611  156765 command_runner.go:130] > configmap/coredns replaced
	I1212 23:20:49.455654  156765 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
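
The one-liner at 23:20:48.665293 rewrites the coredns ConfigMap in place: it inserts a hosts block before the forward plugin (mapping host.minikube.internal to 192.168.39.1) and a log directive before errors, then replaces the ConfigMap. Reconstructed from that sed expression, the inserted Corefile fragment is:

  hosts {
     192.168.39.1 host.minikube.internal
     fallthrough
  }

The result can be confirmed with:

  kubectl -n kube-system get configmap coredns -o yaml
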
	I1212 23:20:49.563733  156765 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1212 23:20:49.574637  156765 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1212 23:20:49.585659  156765 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 23:20:49.593939  156765 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 23:20:49.606469  156765 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1212 23:20:49.615081  156765 command_runner.go:130] > pod/storage-provisioner created
	I1212 23:20:49.617723  156765 main.go:141] libmachine: Making call to close driver server
	I1212 23:20:49.617742  156765 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1212 23:20:49.617752  156765 main.go:141] libmachine: (multinode-510563) Calling .Close
	I1212 23:20:49.617773  156765 main.go:141] libmachine: Making call to close driver server
	I1212 23:20:49.617787  156765 main.go:141] libmachine: (multinode-510563) Calling .Close
	I1212 23:20:49.618052  156765 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:20:49.618080  156765 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:20:49.618100  156765 main.go:141] libmachine: Making call to close driver server
	I1212 23:20:49.618104  156765 main.go:141] libmachine: (multinode-510563) DBG | Closing plugin on server side
	I1212 23:20:49.618110  156765 main.go:141] libmachine: (multinode-510563) Calling .Close
	I1212 23:20:49.618182  156765 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:20:49.618197  156765 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:20:49.618221  156765 main.go:141] libmachine: Making call to close driver server
	I1212 23:20:49.618219  156765 main.go:141] libmachine: (multinode-510563) DBG | Closing plugin on server side
	I1212 23:20:49.618242  156765 main.go:141] libmachine: (multinode-510563) Calling .Close
	I1212 23:20:49.618337  156765 main.go:141] libmachine: (multinode-510563) DBG | Closing plugin on server side
	I1212 23:20:49.618441  156765 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:20:49.618444  156765 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:20:49.618461  156765 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:20:49.618480  156765 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:20:49.618598  156765 round_trippers.go:463] GET https://192.168.39.38:8443/apis/storage.k8s.io/v1/storageclasses
	I1212 23:20:49.618609  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:49.618620  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:49.618631  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:49.621685  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:20:49.621729  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:49.621740  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:49.621748  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:49.621765  156765 round_trippers.go:580]     Content-Length: 1273
	I1212 23:20:49.621776  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:49 GMT
	I1212 23:20:49.621786  156765 round_trippers.go:580]     Audit-Id: e4ab83d9-b88f-4d50-8c22-aff3cf3532b8
	I1212 23:20:49.621797  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:49.621807  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:49.621913  156765 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"standard","uid":"0b5b7faa-7796-4e74-b896-1e06812304c0","resourceVersion":"403","creationTimestamp":"2023-12-12T23:20:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:20:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1212 23:20:49.622465  156765 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"0b5b7faa-7796-4e74-b896-1e06812304c0","resourceVersion":"403","creationTimestamp":"2023-12-12T23:20:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:20:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 23:20:49.622534  156765 round_trippers.go:463] PUT https://192.168.39.38:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1212 23:20:49.622550  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:49.622560  156765 round_trippers.go:473]     Content-Type: application/json
	I1212 23:20:49.622571  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:49.622583  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:49.625842  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:20:49.625860  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:49.625867  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:49.625872  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:49.625877  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:49.625882  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:49.625887  156765 round_trippers.go:580]     Content-Length: 1220
	I1212 23:20:49.625892  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:49 GMT
	I1212 23:20:49.625903  156765 round_trippers.go:580]     Audit-Id: c748fe13-dd74-45c6-b92f-c5aae4090a4b
	I1212 23:20:49.625943  156765 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"0b5b7faa-7796-4e74-b896-1e06812304c0","resourceVersion":"403","creationTimestamp":"2023-12-12T23:20:49Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T23:20:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 23:20:49.626081  156765 main.go:141] libmachine: Making call to close driver server
	I1212 23:20:49.626098  156765 main.go:141] libmachine: (multinode-510563) Calling .Close
	I1212 23:20:49.626315  156765 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:20:49.626334  156765 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:20:49.626345  156765 main.go:141] libmachine: (multinode-510563) DBG | Closing plugin on server side
	I1212 23:20:49.628069  156765 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 23:20:49.629370  156765 addons.go:502] enable addons completed in 1.115617674s: enabled=[storage-provisioner default-storageclass]
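
Both addons finish with their Kubernetes objects in place, and the storageclass addon additionally marks "standard" as the default class (the PUT above sets storageclass.kubernetes.io/is-default-class=true). A quick way to confirm the end state, assuming the stock addon manifests (pod name and namespace follow minikube's bundled storage-provisioner addon):

  # "standard" should be listed with "(default)" and the provisioner pod should come up
  kubectl get storageclass
  kubectl -n kube-system get pod storage-provisioner
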
	I1212 23:20:49.671991  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:49.672013  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:49.672025  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:49.672033  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:49.675952  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:20:49.675985  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:49.675996  156765 round_trippers.go:580]     Audit-Id: 92c9c808-45b9-4ab0-b5e6-3244211b2ab5
	I1212 23:20:49.676005  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:49.676014  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:49.676022  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:49.676029  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:49.676038  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:49 GMT
	I1212 23:20:49.678425  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"370","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 23:20:50.171632  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:50.171657  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:50.171665  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:50.171680  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:50.174464  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:20:50.174504  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:50.174515  156765 round_trippers.go:580]     Audit-Id: 8f46f7ad-48d2-43b1-88e5-cc8f69dba87b
	I1212 23:20:50.174525  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:50.174532  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:50.174540  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:50.174548  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:50.174561  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:50 GMT
	I1212 23:20:50.175264  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"370","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 23:20:50.671904  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:50.671930  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:50.671938  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:50.671944  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:50.674738  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:20:50.674765  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:50.674775  156765 round_trippers.go:580]     Audit-Id: 778b647b-8f0c-4dc6-8a7a-6b605be65b10
	I1212 23:20:50.674784  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:50.674794  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:50.674802  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:50.674812  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:50.674821  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:50 GMT
	I1212 23:20:50.675393  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"370","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 23:20:50.675762  156765 node_ready.go:58] node "multinode-510563" has status "Ready":"False"
	I1212 23:20:51.171669  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:51.171692  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:51.171700  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:51.171707  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:51.174389  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:20:51.174415  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:51.174426  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:51.174432  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:51.174437  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:51.174442  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:51.174447  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:51 GMT
	I1212 23:20:51.174465  156765 round_trippers.go:580]     Audit-Id: b8922f3f-0aa8-4c02-b194-238d36079d36
	I1212 23:20:51.175076  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"370","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 23:20:51.671653  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:51.671681  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:51.671694  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:51.671703  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:51.674929  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:20:51.674948  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:51.674955  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:51.674960  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:51.674966  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:51.674971  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:51 GMT
	I1212 23:20:51.674978  156765 round_trippers.go:580]     Audit-Id: f4c9ac58-8daf-4c50-8339-ad3f0bc3d516
	I1212 23:20:51.675004  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:51.675882  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"370","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 23:20:52.171570  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:52.171603  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:52.171612  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:52.171617  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:52.175034  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:20:52.175051  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:52.175058  156765 round_trippers.go:580]     Audit-Id: 933e7e54-990c-4ac9-9e07-070986da01f2
	I1212 23:20:52.175065  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:52.175073  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:52.175081  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:52.175089  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:52.175098  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:52 GMT
	I1212 23:20:52.175516  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"370","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 23:20:52.672237  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:52.672266  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:52.672274  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:52.672280  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:52.674869  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:20:52.674903  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:52.674912  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:52 GMT
	I1212 23:20:52.674920  156765 round_trippers.go:580]     Audit-Id: 08c267d7-15a5-4cf2-b9e5-fbe18cb888fb
	I1212 23:20:52.674927  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:52.674938  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:52.674946  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:52.674954  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:52.675274  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"370","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 23:20:53.171944  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:53.171973  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:53.171981  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:53.171987  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:53.176101  156765 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:20:53.176128  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:53.176138  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:53.176146  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:53 GMT
	I1212 23:20:53.176154  156765 round_trippers.go:580]     Audit-Id: 2d4a2103-7a02-473c-9b93-46845ad82c30
	I1212 23:20:53.176176  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:53.176184  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:53.176193  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:53.176933  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"370","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 23:20:53.177242  156765 node_ready.go:58] node "multinode-510563" has status "Ready":"False"
	I1212 23:20:53.672307  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:53.672330  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:53.672338  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:53.672348  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:53.674992  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:20:53.675013  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:53.675020  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:53.675025  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:53 GMT
	I1212 23:20:53.675030  156765 round_trippers.go:580]     Audit-Id: f08bed41-34b3-4a0b-85ff-2b364c66d4f6
	I1212 23:20:53.675047  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:53.675055  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:53.675062  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:53.675957  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"370","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 23:20:54.172558  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:54.172587  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:54.172600  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:54.172610  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:54.175522  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:20:54.175560  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:54.175574  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:54 GMT
	I1212 23:20:54.175582  156765 round_trippers.go:580]     Audit-Id: 406ad359-0498-4171-bfc5-7b3681430332
	I1212 23:20:54.175595  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:54.175603  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:54.175610  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:54.175618  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:54.175917  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"370","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 23:20:54.671721  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:54.671752  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:54.671760  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:54.671771  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:54.675554  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:20:54.675587  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:54.675598  156765 round_trippers.go:580]     Audit-Id: e40bf210-6c48-4a2f-983c-bab9909d1ce1
	I1212 23:20:54.675608  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:54.675617  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:54.675626  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:54.675634  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:54.675643  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:54 GMT
	I1212 23:20:54.676633  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"370","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 23:20:55.171761  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:55.171800  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:55.171812  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:55.171821  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:55.175220  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:20:55.175241  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:55.175248  156765 round_trippers.go:580]     Audit-Id: 8779a3eb-228e-4d10-ae58-9a1bbe6a5f84
	I1212 23:20:55.175258  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:55.175266  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:55.175276  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:55.175284  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:55.175308  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:55 GMT
	I1212 23:20:55.175739  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"370","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 23:20:55.672414  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:55.672460  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:55.672468  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:55.672474  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:55.675692  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:20:55.675720  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:55.675735  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:55.675741  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:55.675747  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:55.675753  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:55 GMT
	I1212 23:20:55.675758  156765 round_trippers.go:580]     Audit-Id: 826d9cd0-652d-43af-84c3-91e155e77df8
	I1212 23:20:55.675763  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:55.676037  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"370","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1212 23:20:55.676337  156765 node_ready.go:58] node "multinode-510563" has status "Ready":"False"
	I1212 23:20:56.171680  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:56.171706  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:56.171715  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:56.171721  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:56.174813  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:20:56.174834  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:56.174844  156765 round_trippers.go:580]     Audit-Id: efce0ab9-267a-4ea0-9179-876d1b203280
	I1212 23:20:56.174852  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:56.174861  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:56.174873  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:56.174880  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:56.174887  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:56 GMT
	I1212 23:20:56.175094  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"432","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 23:20:56.175396  156765 node_ready.go:49] node "multinode-510563" has status "Ready":"True"
	I1212 23:20:56.175411  156765 node_ready.go:38] duration metric: took 7.509483513s waiting for node "multinode-510563" to be "Ready" ...
	I1212 23:20:56.175420  156765 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:20:56.175474  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I1212 23:20:56.175483  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:56.175489  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:56.175495  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:56.185982  156765 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1212 23:20:56.186055  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:56.186073  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:56.186081  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:56.186095  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:56.186103  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:56 GMT
	I1212 23:20:56.186114  156765 round_trippers.go:580]     Audit-Id: ec8b19f1-c372-43b2-898a-fdcf93fd3608
	I1212 23:20:56.186123  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:56.187633  156765 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"438"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"438","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54778 chars]
	I1212 23:20:56.191395  156765 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zcxks" in "kube-system" namespace to be "Ready" ...
	I1212 23:20:56.191482  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcxks
	I1212 23:20:56.191493  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:56.191500  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:56.191509  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:56.196403  156765 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:20:56.196424  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:56.196443  156765 round_trippers.go:580]     Audit-Id: 34287a2d-28b0-4953-b8b5-548197b3f96f
	I1212 23:20:56.196452  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:56.196462  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:56.196472  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:56.196482  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:56.196496  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:56 GMT
	I1212 23:20:56.198111  156765 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"438","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:20:56.198789  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:56.198806  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:56.198814  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:56.198820  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:56.202163  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:20:56.202184  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:56.202191  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:56.202197  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:56.202202  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:56.202207  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:56.202213  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:56 GMT
	I1212 23:20:56.202219  156765 round_trippers.go:580]     Audit-Id: 88086158-7e9c-430e-b5b2-030722e77803
	I1212 23:20:56.202725  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"432","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 23:20:56.203184  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcxks
	I1212 23:20:56.203222  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:56.203234  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:56.203243  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:56.206254  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:20:56.206270  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:56.206276  156765 round_trippers.go:580]     Audit-Id: 246a666a-42d7-4797-a1b8-20903021fd58
	I1212 23:20:56.206281  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:56.206286  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:56.206291  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:56.206296  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:56.206301  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:56 GMT
	I1212 23:20:56.206895  156765 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"438","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:20:56.207378  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:56.207403  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:56.207414  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:56.207423  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:56.212617  156765 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:20:56.212636  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:56.212646  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:56.212654  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:56 GMT
	I1212 23:20:56.212662  156765 round_trippers.go:580]     Audit-Id: 5e1bc311-caf1-47bd-a247-0889b63b3017
	I1212 23:20:56.212670  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:56.212678  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:56.212690  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:56.212853  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"432","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 23:20:56.714076  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcxks
	I1212 23:20:56.714105  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:56.714117  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:56.714127  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:56.721885  156765 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1212 23:20:56.721907  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:56.721930  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:56.721936  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:56 GMT
	I1212 23:20:56.721941  156765 round_trippers.go:580]     Audit-Id: 4e4e948e-2215-4eb7-afc4-a7266803d005
	I1212 23:20:56.721946  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:56.721951  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:56.721956  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:56.722616  156765 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"438","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:20:56.723073  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:56.723086  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:56.723093  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:56.723102  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:56.727235  156765 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:20:56.727257  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:56.727266  156765 round_trippers.go:580]     Audit-Id: 75bdfc17-40d4-4911-b49f-13fc23174504
	I1212 23:20:56.727275  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:56.727283  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:56.727289  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:56.727294  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:56.727300  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:56 GMT
	I1212 23:20:56.727805  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"432","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 23:20:57.213431  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcxks
	I1212 23:20:57.213456  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:57.213465  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:57.213471  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:57.224922  156765 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1212 23:20:57.224949  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:57.224960  156765 round_trippers.go:580]     Audit-Id: df4209d1-eb36-4771-8b4f-b9cc960c1778
	I1212 23:20:57.224969  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:57.224977  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:57.224982  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:57.224989  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:57.224998  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:57 GMT
	I1212 23:20:57.225154  156765 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"438","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1212 23:20:57.225592  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:57.225605  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:57.225613  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:57.225619  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:57.228969  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:20:57.228988  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:57.228994  156765 round_trippers.go:580]     Audit-Id: 3e80d3ea-9de0-4ecc-9375-676369fa3a00
	I1212 23:20:57.229000  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:57.229005  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:57.229010  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:57.229015  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:57.229020  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:57 GMT
	I1212 23:20:57.229188  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"432","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 23:20:57.713788  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcxks
	I1212 23:20:57.713817  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:57.713829  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:57.713838  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:57.717036  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:20:57.717062  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:57.717071  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:57 GMT
	I1212 23:20:57.717081  156765 round_trippers.go:580]     Audit-Id: c3098637-3064-4519-912b-7ba3f0f8582b
	I1212 23:20:57.717089  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:57.717098  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:57.717106  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:57.717115  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:57.717261  156765 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"456","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1212 23:20:57.717723  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:57.717740  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:57.717751  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:57.717760  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:57.720481  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:20:57.720504  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:57.720513  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:57.720519  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:57.720524  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:57.720529  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:57.720534  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:57 GMT
	I1212 23:20:57.720539  156765 round_trippers.go:580]     Audit-Id: 670847b8-0922-43af-b2aa-0a5e0fe76945
	I1212 23:20:57.721176  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"432","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 23:20:57.721457  156765 pod_ready.go:92] pod "coredns-5dd5756b68-zcxks" in "kube-system" namespace has status "Ready":"True"
	I1212 23:20:57.721474  156765 pod_ready.go:81] duration metric: took 1.530053029s waiting for pod "coredns-5dd5756b68-zcxks" in "kube-system" namespace to be "Ready" ...
	I1212 23:20:57.721482  156765 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:20:57.721532  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-510563
	I1212 23:20:57.721541  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:57.721548  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:57.721553  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:57.723660  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:20:57.723680  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:57.723689  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:57.723698  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:57 GMT
	I1212 23:20:57.723707  156765 round_trippers.go:580]     Audit-Id: 2d61ed1c-2614-444e-86ff-3b256d67f4e4
	I1212 23:20:57.723716  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:57.723724  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:57.723731  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:57.723878  156765 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-510563","namespace":"kube-system","uid":"2748a67b-24f2-4b90-bf95-eb56755a397a","resourceVersion":"442","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.mirror":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.seen":"2023-12-12T23:20:36.354957049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1212 23:20:57.724205  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:57.724217  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:57.724224  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:57.724230  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:57.727379  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:20:57.727402  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:57.727409  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:57 GMT
	I1212 23:20:57.727414  156765 round_trippers.go:580]     Audit-Id: 1ca2adef-1791-4c71-8b17-36b507efaad9
	I1212 23:20:57.727420  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:57.727425  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:57.727430  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:57.727435  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:57.727716  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"432","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 23:20:57.728003  156765 pod_ready.go:92] pod "etcd-multinode-510563" in "kube-system" namespace has status "Ready":"True"
	I1212 23:20:57.728023  156765 pod_ready.go:81] duration metric: took 6.534988ms waiting for pod "etcd-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:20:57.728033  156765 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:20:57.728083  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-510563
	I1212 23:20:57.728091  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:57.728097  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:57.728107  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:57.730322  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:20:57.730339  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:57.730346  156765 round_trippers.go:580]     Audit-Id: b94a3c5f-fa69-432c-a79b-66b56f4ff762
	I1212 23:20:57.730351  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:57.730356  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:57.730361  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:57.730366  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:57.730371  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:57 GMT
	I1212 23:20:57.730539  156765 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-510563","namespace":"kube-system","uid":"e8a8ed00-d13d-44f0-b7d6-b42bf1342d95","resourceVersion":"439","creationTimestamp":"2023-12-12T23:20:34Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.38:8443","kubernetes.io/config.hash":"4b970951c1b4ca2bc525afa7c2eb2fef","kubernetes.io/config.mirror":"4b970951c1b4ca2bc525afa7c2eb2fef","kubernetes.io/config.seen":"2023-12-12T23:20:27.932579600Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1212 23:20:57.730883  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:57.730910  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:57.730917  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:57.730929  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:57.733259  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:20:57.733279  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:57.733288  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:57.733298  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:57.733324  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:57.733334  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:57.733340  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:57 GMT
	I1212 23:20:57.733347  156765 round_trippers.go:580]     Audit-Id: 6c82eedc-ad4d-435c-af61-976fd3ca8b33
	I1212 23:20:57.733700  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"432","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 23:20:57.734003  156765 pod_ready.go:92] pod "kube-apiserver-multinode-510563" in "kube-system" namespace has status "Ready":"True"
	I1212 23:20:57.734018  156765 pod_ready.go:81] duration metric: took 5.972502ms waiting for pod "kube-apiserver-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:20:57.734027  156765 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:20:57.734068  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-510563
	I1212 23:20:57.734076  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:57.734082  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:57.734088  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:57.735821  156765 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:20:57.735838  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:57.735845  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:57 GMT
	I1212 23:20:57.735850  156765 round_trippers.go:580]     Audit-Id: 5d5929b4-0be2-43a5-a137-99b5cf06b41a
	I1212 23:20:57.735855  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:57.735860  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:57.735867  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:57.735873  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:57.736043  156765 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-510563","namespace":"kube-system","uid":"efdc7f68-25d6-4f6a-ab8f-1dec43407375","resourceVersion":"440","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"50f588add554ab298cca0792048dbecc","kubernetes.io/config.mirror":"50f588add554ab298cca0792048dbecc","kubernetes.io/config.seen":"2023-12-12T23:20:36.354954910Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1212 23:20:57.772683  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:57.772709  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:57.772718  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:57.772724  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:57.775231  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:20:57.775249  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:57.775255  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:57.775261  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:57 GMT
	I1212 23:20:57.775269  156765 round_trippers.go:580]     Audit-Id: 688785fa-eb42-479f-9e92-bd247c9bde47
	I1212 23:20:57.775277  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:57.775285  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:57.775295  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:57.775442  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"432","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 23:20:57.775733  156765 pod_ready.go:92] pod "kube-controller-manager-multinode-510563" in "kube-system" namespace has status "Ready":"True"
	I1212 23:20:57.775748  156765 pod_ready.go:81] duration metric: took 41.714811ms waiting for pod "kube-controller-manager-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:20:57.775757  156765 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hspw8" in "kube-system" namespace to be "Ready" ...
	I1212 23:20:57.972269  156765 request.go:629] Waited for 196.427663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hspw8
	I1212 23:20:57.972333  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hspw8
	I1212 23:20:57.972338  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:57.972346  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:57.972352  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:57.975061  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:20:57.975089  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:57.975097  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:57 GMT
	I1212 23:20:57.975103  156765 round_trippers.go:580]     Audit-Id: 14ed2215-f336-4537-a282-48785708137f
	I1212 23:20:57.975108  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:57.975112  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:57.975117  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:57.975123  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:57.975314  156765 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hspw8","generateName":"kube-proxy-","namespace":"kube-system","uid":"a2255be6-8705-40cd-8f35-a3e82906190c","resourceVersion":"421","creationTimestamp":"2023-12-12T23:20:49Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d1b4c800-a24b-499d-bbe3-4a554353bc2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1b4c800-a24b-499d-bbe3-4a554353bc2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1212 23:20:58.172130  156765 request.go:629] Waited for 196.389415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:58.172217  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:58.172222  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:58.172231  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:58.172237  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:58.175791  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:20:58.175817  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:58.175824  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:58.175830  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:58 GMT
	I1212 23:20:58.175835  156765 round_trippers.go:580]     Audit-Id: 4042629b-df84-457d-ad7a-4fc2a1a35fec
	I1212 23:20:58.175840  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:58.175846  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:58.175851  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:58.176986  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"432","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 23:20:58.177283  156765 pod_ready.go:92] pod "kube-proxy-hspw8" in "kube-system" namespace has status "Ready":"True"
	I1212 23:20:58.177296  156765 pod_ready.go:81] duration metric: took 401.534401ms waiting for pod "kube-proxy-hspw8" in "kube-system" namespace to be "Ready" ...
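
The repeated "Waited for ... due to client-side throttling, not priority and fairness" messages above come from client-go's token-bucket rate limiter on the test's own client config, not from API Priority and Fairness on the server. A minimal sketch of where those limits live, assuming a stock client-go rest.Config; the QPS/Burst values shown are client-go's defaults, not anything minikube-specific:

    package kubeclient

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/clientcmd"
    )

    // buildThrottledClient builds a clientset whose requests are paced by
    // client-go's token-bucket limiter. With the defaults below (QPS 5,
    // Burst 10), a burst of polling GETs queues up and produces waits like
    // the ~196ms ones logged above.
    func buildThrottledClient() (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            return nil, err
        }
        cfg.QPS = rest.DefaultQPS     // 5 requests/second steady state
        cfg.Burst = rest.DefaultBurst // up to 10 requests in a burst
        return kubernetes.NewForConfig(cfg)
    }

Raising QPS/Burst on the config removes these client-side waits, at the cost of more load on the apiserver.
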
	I1212 23:20:58.177305  156765 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:20:58.371711  156765 request.go:629] Waited for 194.330072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-510563
	I1212 23:20:58.371791  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-510563
	I1212 23:20:58.371796  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:58.371804  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:58.371810  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:58.374743  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:20:58.374767  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:58.374776  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:58.374784  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:58.374791  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:58.374799  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:58 GMT
	I1212 23:20:58.374809  156765 round_trippers.go:580]     Audit-Id: ebc9f3cf-67bf-442e-96cd-7276de8c7ac7
	I1212 23:20:58.374817  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:58.374915  156765 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-510563","namespace":"kube-system","uid":"044da73c-9466-4a43-b283-5f4b9cc04df9","resourceVersion":"441","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fdb335c77d5fb1581ea23fa0adf419e9","kubernetes.io/config.mirror":"fdb335c77d5fb1581ea23fa0adf419e9","kubernetes.io/config.seen":"2023-12-12T23:20:36.354955844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1212 23:20:58.572005  156765 request.go:629] Waited for 196.726664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:58.572071  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:20:58.572076  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:58.572084  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:58.572103  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:58.575045  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:20:58.575067  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:58.575076  156765 round_trippers.go:580]     Audit-Id: a2faf19f-3e1d-49d7-965c-4e9fdc638b21
	I1212 23:20:58.575085  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:58.575102  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:58.575114  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:58.575120  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:58.575125  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:58 GMT
	I1212 23:20:58.575660  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"432","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 23:20:58.575963  156765 pod_ready.go:92] pod "kube-scheduler-multinode-510563" in "kube-system" namespace has status "Ready":"True"
	I1212 23:20:58.575980  156765 pod_ready.go:81] duration metric: took 398.668341ms waiting for pod "kube-scheduler-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:20:58.575990  156765 pod_ready.go:38] duration metric: took 2.400560491s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
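
The stretch above is minikube polling each control-plane pod's Ready condition (and re-fetching the node) through plain REST calls. A stand-alone sketch of the same pattern with client-go; the namespace and pod name are taken from the log, while the helper name, poll interval, and timeout are illustrative rather than minikube's actual pod_ready.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its Ready condition is True, the check
    // the log above reports as `has status "Ready":"True"`.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second) // illustrative poll interval
        }
        return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Name as it appears in the log; 6m matches the logged wait budget.
        if err := waitPodReady(cs, "kube-system", "kube-scheduler-multinode-510563", 6*time.Minute); err != nil {
            panic(err)
        }
    }
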
	I1212 23:20:58.576005  156765 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:20:58.576056  156765 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:20:58.592411  156765 command_runner.go:130] > 1104
	I1212 23:20:58.592499  156765 api_server.go:72] duration metric: took 10.0316209s to wait for apiserver process to appear ...
	I1212 23:20:58.592510  156765 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:20:58.592524  156765 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I1212 23:20:58.598909  156765 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I1212 23:20:58.598989  156765 round_trippers.go:463] GET https://192.168.39.38:8443/version
	I1212 23:20:58.599002  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:58.599012  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:58.599024  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:58.600214  156765 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:20:58.600230  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:58.600236  156765 round_trippers.go:580]     Audit-Id: 857ee718-9bf4-4fe0-9129-ffc3ffe09c7a
	I1212 23:20:58.600242  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:58.600247  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:58.600252  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:58.600257  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:58.600265  156765 round_trippers.go:580]     Content-Length: 264
	I1212 23:20:58.600270  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:58 GMT
	I1212 23:20:58.600287  156765 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 23:20:58.600362  156765 api_server.go:141] control plane version: v1.28.4
	I1212 23:20:58.600378  156765 api_server.go:131] duration metric: took 7.862707ms to wait for apiserver health ...
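
Here the test hits /healthz for a bare liveness check (the "ok" body above) and /version for the control-plane version. A short sketch of the same two probes through client-go's discovery client; the clientset is assumed to be built as in the earlier sketches, and the function name is made up for illustration:

    package kubeclient

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    // probeAPIServer mirrors the two GETs above: a raw /healthz request that
    // should return "ok", and /version decoded into a version.Info struct.
    func probeAPIServer(cs *kubernetes.Clientset) error {
        healthz, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
        if err != nil {
            return err
        }
        fmt.Printf("healthz: %s\n", healthz) // "ok"

        info, err := cs.Discovery().ServerVersion()
        if err != nil {
            return err
        }
        fmt.Printf("control plane version: %s\n", info.GitVersion) // e.g. v1.28.4
        return nil
    }
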
	I1212 23:20:58.600385  156765 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:20:58.771754  156765 request.go:629] Waited for 171.297779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I1212 23:20:58.771814  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I1212 23:20:58.771819  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:58.771827  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:58.771833  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:58.775443  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:20:58.775473  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:58.775483  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:58.775491  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:58.775496  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:58.775506  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:58 GMT
	I1212 23:20:58.775511  156765 round_trippers.go:580]     Audit-Id: 857c1fe8-5b40-4bcd-81a2-bbdc608a097a
	I1212 23:20:58.775517  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:58.777027  156765 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"460"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"456","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53956 chars]
	I1212 23:20:58.778687  156765 system_pods.go:59] 8 kube-system pods found
	I1212 23:20:58.778718  156765 system_pods.go:61] "coredns-5dd5756b68-zcxks" [503de693-19d6-45c5-97c6-3b8e5657bfee] Running
	I1212 23:20:58.778724  156765 system_pods.go:61] "etcd-multinode-510563" [2748a67b-24f2-4b90-bf95-eb56755a397a] Running
	I1212 23:20:58.778748  156765 system_pods.go:61] "kindnet-v4js8" [cfe24f85-472c-4ef2-9a48-9e3647cc8feb] Running
	I1212 23:20:58.778753  156765 system_pods.go:61] "kube-apiserver-multinode-510563" [e8a8ed00-d13d-44f0-b7d6-b42bf1342d95] Running
	I1212 23:20:58.778757  156765 system_pods.go:61] "kube-controller-manager-multinode-510563" [efdc7f68-25d6-4f6a-ab8f-1dec43407375] Running
	I1212 23:20:58.778761  156765 system_pods.go:61] "kube-proxy-hspw8" [a2255be6-8705-40cd-8f35-a3e82906190c] Running
	I1212 23:20:58.778765  156765 system_pods.go:61] "kube-scheduler-multinode-510563" [044da73c-9466-4a43-b283-5f4b9cc04df9] Running
	I1212 23:20:58.778769  156765 system_pods.go:61] "storage-provisioner" [cb4f186a-9bb9-488f-8a74-6e01f352fc05] Running
	I1212 23:20:58.778778  156765 system_pods.go:74] duration metric: took 178.388898ms to wait for pod list to return data ...
	I1212 23:20:58.778785  156765 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:20:58.972237  156765 request.go:629] Waited for 193.382252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/default/serviceaccounts
	I1212 23:20:58.972320  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/default/serviceaccounts
	I1212 23:20:58.972326  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:58.972337  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:58.972348  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:58.975474  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:20:58.975498  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:58.975505  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:58 GMT
	I1212 23:20:58.975511  156765 round_trippers.go:580]     Audit-Id: 5574715b-5bf3-40ea-b862-86b9a16cf094
	I1212 23:20:58.975516  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:58.975521  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:58.975526  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:58.975531  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:58.975536  156765 round_trippers.go:580]     Content-Length: 261
	I1212 23:20:58.975555  156765 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"460"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"477a27c6-8724-40b2-af7e-afc80b75b08c","resourceVersion":"350","creationTimestamp":"2023-12-12T23:20:48Z"}}]}
	I1212 23:20:58.975721  156765 default_sa.go:45] found service account: "default"
	I1212 23:20:58.975737  156765 default_sa.go:55] duration metric: took 196.946262ms for default service account to be created ...
	I1212 23:20:58.975745  156765 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:20:59.172194  156765 request.go:629] Waited for 196.38414ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I1212 23:20:59.172276  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I1212 23:20:59.172284  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:59.172294  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:59.172304  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:59.176179  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:20:59.176205  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:59.176216  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:59 GMT
	I1212 23:20:59.176224  156765 round_trippers.go:580]     Audit-Id: 25e97c1d-7a0a-4768-b744-09fabfde8baa
	I1212 23:20:59.176233  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:59.176241  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:59.176252  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:59.176260  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:59.177162  156765 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"461"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"456","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53956 chars]
	I1212 23:20:59.178847  156765 system_pods.go:86] 8 kube-system pods found
	I1212 23:20:59.178871  156765 system_pods.go:89] "coredns-5dd5756b68-zcxks" [503de693-19d6-45c5-97c6-3b8e5657bfee] Running
	I1212 23:20:59.178879  156765 system_pods.go:89] "etcd-multinode-510563" [2748a67b-24f2-4b90-bf95-eb56755a397a] Running
	I1212 23:20:59.178885  156765 system_pods.go:89] "kindnet-v4js8" [cfe24f85-472c-4ef2-9a48-9e3647cc8feb] Running
	I1212 23:20:59.178891  156765 system_pods.go:89] "kube-apiserver-multinode-510563" [e8a8ed00-d13d-44f0-b7d6-b42bf1342d95] Running
	I1212 23:20:59.178897  156765 system_pods.go:89] "kube-controller-manager-multinode-510563" [efdc7f68-25d6-4f6a-ab8f-1dec43407375] Running
	I1212 23:20:59.178903  156765 system_pods.go:89] "kube-proxy-hspw8" [a2255be6-8705-40cd-8f35-a3e82906190c] Running
	I1212 23:20:59.178910  156765 system_pods.go:89] "kube-scheduler-multinode-510563" [044da73c-9466-4a43-b283-5f4b9cc04df9] Running
	I1212 23:20:59.178916  156765 system_pods.go:89] "storage-provisioner" [cb4f186a-9bb9-488f-8a74-6e01f352fc05] Running
	I1212 23:20:59.178930  156765 system_pods.go:126] duration metric: took 203.180018ms to wait for k8s-apps to be running ...
	I1212 23:20:59.178947  156765 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:20:59.179005  156765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:20:59.192549  156765 system_svc.go:56] duration metric: took 13.590821ms WaitForService to wait for kubelet.
	I1212 23:20:59.192573  156765 kubeadm.go:581] duration metric: took 10.631696318s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
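
The kubelet check just above is `systemctl is-active --quiet` run over SSH, so the exit status alone decides whether the service counts as running. A local equivalent with os/exec, assuming a systemd host (this is not minikube's ssh_runner, just the same command run directly):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet <unit>` prints nothing and exits 0
        // only when the unit is active, so the error check is the whole test.
        if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            log.Fatalf("kubelet service is not active: %v", err)
        }
        log.Println("kubelet service is running")
    }
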
	I1212 23:20:59.192594  156765 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:20:59.372033  156765 request.go:629] Waited for 179.357848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes
	I1212 23:20:59.372118  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes
	I1212 23:20:59.372128  156765 round_trippers.go:469] Request Headers:
	I1212 23:20:59.372141  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:20:59.372152  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:20:59.374881  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:20:59.374906  156765 round_trippers.go:577] Response Headers:
	I1212 23:20:59.374919  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:20:59.374926  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:20:59.374934  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:20:59.374941  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:20:59.374950  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:20:59 GMT
	I1212 23:20:59.374958  156765 round_trippers.go:580]     Audit-Id: 2432d21d-09bd-4893-a162-09c8b094ac43
	I1212 23:20:59.375151  156765 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"461"},"items":[{"metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"432","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5951 chars]
	I1212 23:20:59.375718  156765 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:20:59.375759  156765 node_conditions.go:123] node cpu capacity is 2
	I1212 23:20:59.375781  156765 node_conditions.go:105] duration metric: took 183.180134ms to run NodePressure ...
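
The NodePressure step reads its numbers straight off the Node object's status.capacity, as returned by the GET above. A brief sketch of pulling the same two fields with client-go (node name from the log; a clientset built as in the earlier sketches is assumed):

    package kubeclient

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity reads the same fields node_conditions.go reports
    // above: ephemeral storage and CPU from the node's status.capacity.
    func printNodeCapacity(cs *kubernetes.Clientset, name string) error {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        fmt.Println("ephemeral storage:", node.Status.Capacity.StorageEphemeral().String()) // 17784752Ki
        fmt.Println("cpu:", node.Status.Capacity.Cpu().String())                            // 2
        return nil
    }
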
	I1212 23:20:59.375799  156765 start.go:228] waiting for startup goroutines ...
	I1212 23:20:59.375809  156765 start.go:233] waiting for cluster config update ...
	I1212 23:20:59.375823  156765 start.go:242] writing updated cluster config ...
	I1212 23:20:59.378632  156765 out.go:177] 
	I1212 23:20:59.380514  156765 config.go:182] Loaded profile config "multinode-510563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:20:59.380639  156765 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/config.json ...
	I1212 23:20:59.382611  156765 out.go:177] * Starting worker node multinode-510563-m02 in cluster multinode-510563
	I1212 23:20:59.384066  156765 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:20:59.384093  156765 cache.go:56] Caching tarball of preloaded images
	I1212 23:20:59.384204  156765 preload.go:174] Found /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 23:20:59.384222  156765 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 23:20:59.384391  156765 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/config.json ...
	I1212 23:20:59.384634  156765 start.go:365] acquiring machines lock for multinode-510563-m02: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:20:59.384685  156765 start.go:369] acquired machines lock for "multinode-510563-m02" in 29.168µs
	I1212 23:20:59.384704  156765 start.go:93] Provisioning new machine with config: &{Name:multinode-510563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-510563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:t
rue ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 23:20:59.384802  156765 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1212 23:20:59.386576  156765 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 23:20:59.386679  156765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:20:59.386727  156765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:20:59.401117  156765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37927
	I1212 23:20:59.401656  156765 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:20:59.402163  156765 main.go:141] libmachine: Using API Version  1
	I1212 23:20:59.402187  156765 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:20:59.402482  156765 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:20:59.402695  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetMachineName
	I1212 23:20:59.402870  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .DriverName
	I1212 23:20:59.403083  156765 start.go:159] libmachine.API.Create for "multinode-510563" (driver="kvm2")
	I1212 23:20:59.403115  156765 client.go:168] LocalClient.Create starting
	I1212 23:20:59.403149  156765 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem
	I1212 23:20:59.403189  156765 main.go:141] libmachine: Decoding PEM data...
	I1212 23:20:59.403208  156765 main.go:141] libmachine: Parsing certificate...
	I1212 23:20:59.403274  156765 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem
	I1212 23:20:59.403298  156765 main.go:141] libmachine: Decoding PEM data...
	I1212 23:20:59.403318  156765 main.go:141] libmachine: Parsing certificate...
	I1212 23:20:59.403346  156765 main.go:141] libmachine: Running pre-create checks...
	I1212 23:20:59.403359  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .PreCreateCheck
	I1212 23:20:59.403561  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetConfigRaw
	I1212 23:20:59.403972  156765 main.go:141] libmachine: Creating machine...
	I1212 23:20:59.403988  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .Create
	I1212 23:20:59.404126  156765 main.go:141] libmachine: (multinode-510563-m02) Creating KVM machine...
	I1212 23:20:59.405462  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | found existing default KVM network
	I1212 23:20:59.405623  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | found existing private KVM network mk-multinode-510563
	I1212 23:20:59.405857  156765 main.go:141] libmachine: (multinode-510563-m02) Setting up store path in /home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m02 ...
	I1212 23:20:59.405888  156765 main.go:141] libmachine: (multinode-510563-m02) Building disk image from file:///home/jenkins/minikube-integration/17777-136241/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso
	I1212 23:20:59.405947  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | I1212 23:20:59.405825  157157 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 23:20:59.406037  156765 main.go:141] libmachine: (multinode-510563-m02) Downloading /home/jenkins/minikube-integration/17777-136241/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17777-136241/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso...
	I1212 23:20:59.611517  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | I1212 23:20:59.611386  157157 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m02/id_rsa...
	I1212 23:20:59.814171  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | I1212 23:20:59.814043  157157 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m02/multinode-510563-m02.rawdisk...
	I1212 23:20:59.814210  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | Writing magic tar header
	I1212 23:20:59.814243  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | Writing SSH key tar header
	I1212 23:20:59.814257  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | I1212 23:20:59.814156  157157 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m02 ...
	I1212 23:20:59.814278  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m02
	I1212 23:20:59.814298  156765 main.go:141] libmachine: (multinode-510563-m02) Setting executable bit set on /home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m02 (perms=drwx------)
	I1212 23:20:59.814312  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17777-136241/.minikube/machines
	I1212 23:20:59.814332  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 23:20:59.814349  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17777-136241
	I1212 23:20:59.814371  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 23:20:59.814393  156765 main.go:141] libmachine: (multinode-510563-m02) Setting executable bit set on /home/jenkins/minikube-integration/17777-136241/.minikube/machines (perms=drwxr-xr-x)
	I1212 23:20:59.814407  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | Checking permissions on dir: /home/jenkins
	I1212 23:20:59.814420  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | Checking permissions on dir: /home
	I1212 23:20:59.814434  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | Skipping /home - not owner
	I1212 23:20:59.814457  156765 main.go:141] libmachine: (multinode-510563-m02) Setting executable bit set on /home/jenkins/minikube-integration/17777-136241/.minikube (perms=drwxr-xr-x)
	I1212 23:20:59.814481  156765 main.go:141] libmachine: (multinode-510563-m02) Setting executable bit set on /home/jenkins/minikube-integration/17777-136241 (perms=drwxrwxr-x)
	I1212 23:20:59.814496  156765 main.go:141] libmachine: (multinode-510563-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 23:20:59.814511  156765 main.go:141] libmachine: (multinode-510563-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 23:20:59.814526  156765 main.go:141] libmachine: (multinode-510563-m02) Creating domain...
	I1212 23:20:59.815330  156765 main.go:141] libmachine: (multinode-510563-m02) define libvirt domain using xml: 
	I1212 23:20:59.815359  156765 main.go:141] libmachine: (multinode-510563-m02) <domain type='kvm'>
	I1212 23:20:59.815375  156765 main.go:141] libmachine: (multinode-510563-m02)   <name>multinode-510563-m02</name>
	I1212 23:20:59.815389  156765 main.go:141] libmachine: (multinode-510563-m02)   <memory unit='MiB'>2200</memory>
	I1212 23:20:59.815400  156765 main.go:141] libmachine: (multinode-510563-m02)   <vcpu>2</vcpu>
	I1212 23:20:59.815411  156765 main.go:141] libmachine: (multinode-510563-m02)   <features>
	I1212 23:20:59.815425  156765 main.go:141] libmachine: (multinode-510563-m02)     <acpi/>
	I1212 23:20:59.815431  156765 main.go:141] libmachine: (multinode-510563-m02)     <apic/>
	I1212 23:20:59.815438  156765 main.go:141] libmachine: (multinode-510563-m02)     <pae/>
	I1212 23:20:59.815444  156765 main.go:141] libmachine: (multinode-510563-m02)     
	I1212 23:20:59.815461  156765 main.go:141] libmachine: (multinode-510563-m02)   </features>
	I1212 23:20:59.815471  156765 main.go:141] libmachine: (multinode-510563-m02)   <cpu mode='host-passthrough'>
	I1212 23:20:59.815485  156765 main.go:141] libmachine: (multinode-510563-m02)   
	I1212 23:20:59.815495  156765 main.go:141] libmachine: (multinode-510563-m02)   </cpu>
	I1212 23:20:59.815504  156765 main.go:141] libmachine: (multinode-510563-m02)   <os>
	I1212 23:20:59.815522  156765 main.go:141] libmachine: (multinode-510563-m02)     <type>hvm</type>
	I1212 23:20:59.815531  156765 main.go:141] libmachine: (multinode-510563-m02)     <boot dev='cdrom'/>
	I1212 23:20:59.815538  156765 main.go:141] libmachine: (multinode-510563-m02)     <boot dev='hd'/>
	I1212 23:20:59.815547  156765 main.go:141] libmachine: (multinode-510563-m02)     <bootmenu enable='no'/>
	I1212 23:20:59.815556  156765 main.go:141] libmachine: (multinode-510563-m02)   </os>
	I1212 23:20:59.815565  156765 main.go:141] libmachine: (multinode-510563-m02)   <devices>
	I1212 23:20:59.815579  156765 main.go:141] libmachine: (multinode-510563-m02)     <disk type='file' device='cdrom'>
	I1212 23:20:59.815594  156765 main.go:141] libmachine: (multinode-510563-m02)       <source file='/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m02/boot2docker.iso'/>
	I1212 23:20:59.815604  156765 main.go:141] libmachine: (multinode-510563-m02)       <target dev='hdc' bus='scsi'/>
	I1212 23:20:59.815609  156765 main.go:141] libmachine: (multinode-510563-m02)       <readonly/>
	I1212 23:20:59.815618  156765 main.go:141] libmachine: (multinode-510563-m02)     </disk>
	I1212 23:20:59.815624  156765 main.go:141] libmachine: (multinode-510563-m02)     <disk type='file' device='disk'>
	I1212 23:20:59.815643  156765 main.go:141] libmachine: (multinode-510563-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 23:20:59.815654  156765 main.go:141] libmachine: (multinode-510563-m02)       <source file='/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m02/multinode-510563-m02.rawdisk'/>
	I1212 23:20:59.815663  156765 main.go:141] libmachine: (multinode-510563-m02)       <target dev='hda' bus='virtio'/>
	I1212 23:20:59.815669  156765 main.go:141] libmachine: (multinode-510563-m02)     </disk>
	I1212 23:20:59.815701  156765 main.go:141] libmachine: (multinode-510563-m02)     <interface type='network'>
	I1212 23:20:59.815730  156765 main.go:141] libmachine: (multinode-510563-m02)       <source network='mk-multinode-510563'/>
	I1212 23:20:59.815745  156765 main.go:141] libmachine: (multinode-510563-m02)       <model type='virtio'/>
	I1212 23:20:59.815759  156765 main.go:141] libmachine: (multinode-510563-m02)     </interface>
	I1212 23:20:59.815774  156765 main.go:141] libmachine: (multinode-510563-m02)     <interface type='network'>
	I1212 23:20:59.815787  156765 main.go:141] libmachine: (multinode-510563-m02)       <source network='default'/>
	I1212 23:20:59.815802  156765 main.go:141] libmachine: (multinode-510563-m02)       <model type='virtio'/>
	I1212 23:20:59.815814  156765 main.go:141] libmachine: (multinode-510563-m02)     </interface>
	I1212 23:20:59.815829  156765 main.go:141] libmachine: (multinode-510563-m02)     <serial type='pty'>
	I1212 23:20:59.815841  156765 main.go:141] libmachine: (multinode-510563-m02)       <target port='0'/>
	I1212 23:20:59.815913  156765 main.go:141] libmachine: (multinode-510563-m02)     </serial>
	I1212 23:20:59.815941  156765 main.go:141] libmachine: (multinode-510563-m02)     <console type='pty'>
	I1212 23:20:59.815955  156765 main.go:141] libmachine: (multinode-510563-m02)       <target type='serial' port='0'/>
	I1212 23:20:59.815973  156765 main.go:141] libmachine: (multinode-510563-m02)     </console>
	I1212 23:20:59.815991  156765 main.go:141] libmachine: (multinode-510563-m02)     <rng model='virtio'>
	I1212 23:20:59.816005  156765 main.go:141] libmachine: (multinode-510563-m02)       <backend model='random'>/dev/random</backend>
	I1212 23:20:59.816016  156765 main.go:141] libmachine: (multinode-510563-m02)     </rng>
	I1212 23:20:59.816025  156765 main.go:141] libmachine: (multinode-510563-m02)     
	I1212 23:20:59.816052  156765 main.go:141] libmachine: (multinode-510563-m02)     
	I1212 23:20:59.816071  156765 main.go:141] libmachine: (multinode-510563-m02)   </devices>
	I1212 23:20:59.816087  156765 main.go:141] libmachine: (multinode-510563-m02) </domain>
	I1212 23:20:59.816100  156765 main.go:141] libmachine: (multinode-510563-m02) 
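[editor's note] The block above is libmachine logging the libvirt domain XML it is about to define for the m02 node: boot ISO on a SCSI cdrom, the raw machine disk on virtio, two virtio NICs (the default network plus mk-multinode-510563), a pty serial console, and a virtio RNG. A minimal sketch of the define-and-start step using the libvirt Go bindings follows; the connection URI, placeholder XML, and error handling are illustrative assumptions, not the kvm2 driver's actual code (building it also requires the libvirt development headers).

// Illustrative only: define a domain from an XML document and start it,
// roughly what the "Creating domain..." step above amounts to.
package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system") // assumed URI
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // persist the definition
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // boot the VM; the DHCP lease / IP comes later
}

func main() {
	if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
		log.Fatal(err)
	}
}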
	I1212 23:20:59.823036  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:b1:3b:67 in network default
	I1212 23:20:59.823594  156765 main.go:141] libmachine: (multinode-510563-m02) Ensuring networks are active...
	I1212 23:20:59.823621  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:20:59.824423  156765 main.go:141] libmachine: (multinode-510563-m02) Ensuring network default is active
	I1212 23:20:59.824774  156765 main.go:141] libmachine: (multinode-510563-m02) Ensuring network mk-multinode-510563 is active
	I1212 23:20:59.825155  156765 main.go:141] libmachine: (multinode-510563-m02) Getting domain xml...
	I1212 23:20:59.825899  156765 main.go:141] libmachine: (multinode-510563-m02) Creating domain...
	I1212 23:21:01.062000  156765 main.go:141] libmachine: (multinode-510563-m02) Waiting to get IP...
	I1212 23:21:01.062804  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:01.063230  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | unable to find current IP address of domain multinode-510563-m02 in network mk-multinode-510563
	I1212 23:21:01.063258  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | I1212 23:21:01.063204  157157 retry.go:31] will retry after 300.351746ms: waiting for machine to come up
	I1212 23:21:01.365601  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:01.366203  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | unable to find current IP address of domain multinode-510563-m02 in network mk-multinode-510563
	I1212 23:21:01.366231  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | I1212 23:21:01.366134  157157 retry.go:31] will retry after 277.315269ms: waiting for machine to come up
	I1212 23:21:01.644749  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:01.645255  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | unable to find current IP address of domain multinode-510563-m02 in network mk-multinode-510563
	I1212 23:21:01.645294  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | I1212 23:21:01.645225  157157 retry.go:31] will retry after 393.912903ms: waiting for machine to come up
	I1212 23:21:02.040782  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:02.041238  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | unable to find current IP address of domain multinode-510563-m02 in network mk-multinode-510563
	I1212 23:21:02.041269  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | I1212 23:21:02.041190  157157 retry.go:31] will retry after 518.569072ms: waiting for machine to come up
	I1212 23:21:02.561058  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:02.561374  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | unable to find current IP address of domain multinode-510563-m02 in network mk-multinode-510563
	I1212 23:21:02.561412  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | I1212 23:21:02.561319  157157 retry.go:31] will retry after 523.922674ms: waiting for machine to come up
	I1212 23:21:03.086543  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:03.086981  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | unable to find current IP address of domain multinode-510563-m02 in network mk-multinode-510563
	I1212 23:21:03.087019  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | I1212 23:21:03.086912  157157 retry.go:31] will retry after 724.099911ms: waiting for machine to come up
	I1212 23:21:03.812223  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:03.812752  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | unable to find current IP address of domain multinode-510563-m02 in network mk-multinode-510563
	I1212 23:21:03.812781  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | I1212 23:21:03.812701  157157 retry.go:31] will retry after 987.401767ms: waiting for machine to come up
	I1212 23:21:04.801320  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:04.801859  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | unable to find current IP address of domain multinode-510563-m02 in network mk-multinode-510563
	I1212 23:21:04.801898  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | I1212 23:21:04.801810  157157 retry.go:31] will retry after 1.423394522s: waiting for machine to come up
	I1212 23:21:06.226554  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:06.226933  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | unable to find current IP address of domain multinode-510563-m02 in network mk-multinode-510563
	I1212 23:21:06.226958  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | I1212 23:21:06.226879  157157 retry.go:31] will retry after 1.540036161s: waiting for machine to come up
	I1212 23:21:07.769569  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:07.770020  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | unable to find current IP address of domain multinode-510563-m02 in network mk-multinode-510563
	I1212 23:21:07.770061  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | I1212 23:21:07.769945  157157 retry.go:31] will retry after 1.925686797s: waiting for machine to come up
	I1212 23:21:09.697589  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:09.698040  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | unable to find current IP address of domain multinode-510563-m02 in network mk-multinode-510563
	I1212 23:21:09.698074  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | I1212 23:21:09.697972  157157 retry.go:31] will retry after 2.412192717s: waiting for machine to come up
	I1212 23:21:12.113728  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:12.114209  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | unable to find current IP address of domain multinode-510563-m02 in network mk-multinode-510563
	I1212 23:21:12.114237  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | I1212 23:21:12.114139  157157 retry.go:31] will retry after 3.122795652s: waiting for machine to come up
	I1212 23:21:15.238823  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:15.239278  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | unable to find current IP address of domain multinode-510563-m02 in network mk-multinode-510563
	I1212 23:21:15.239302  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | I1212 23:21:15.239232  157157 retry.go:31] will retry after 3.022526518s: waiting for machine to come up
	I1212 23:21:18.263050  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:18.263511  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | unable to find current IP address of domain multinode-510563-m02 in network mk-multinode-510563
	I1212 23:21:18.263540  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | I1212 23:21:18.263456  157157 retry.go:31] will retry after 5.28791585s: waiting for machine to come up
	I1212 23:21:23.553664  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:23.554153  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has current primary IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:23.554180  156765 main.go:141] libmachine: (multinode-510563-m02) Found IP for machine: 192.168.39.109
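[editor's note] The "will retry after ..." lines above come from a retry helper that polls the network's DHCP leases with a growing, jittered delay until the new domain reports an address (here it resolves to 192.168.39.109 after ~22s). A generic sketch of that poll-with-backoff pattern is below; the growth factor, jitter, and timeout are assumptions, not minikube's retry.go.

// Generic poll-with-backoff sketch.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitFor(check func() (bool, error), maxWait time.Duration) error {
	delay := 300 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		// Grow the delay and add a little jitter, roughly like the sequence in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	err := waitFor(func() (bool, error) {
		attempts++
		return attempts >= 4, nil // pretend the lease shows up on the 4th poll
	}, time.Minute)
	fmt.Println("done:", err)
}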
	I1212 23:21:23.554195  156765 main.go:141] libmachine: (multinode-510563-m02) Reserving static IP address...
	I1212 23:21:23.554537  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | unable to find host DHCP lease matching {name: "multinode-510563-m02", mac: "52:54:00:e2:30:41", ip: "192.168.39.109"} in network mk-multinode-510563
	I1212 23:21:23.627004  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | Getting to WaitForSSH function...
	I1212 23:21:23.627041  156765 main.go:141] libmachine: (multinode-510563-m02) Reserved static IP address: 192.168.39.109
	I1212 23:21:23.627071  156765 main.go:141] libmachine: (multinode-510563-m02) Waiting for SSH to be available...
	I1212 23:21:23.629418  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:23.629708  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e2:30:41}
	I1212 23:21:23.629744  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:23.629885  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | Using SSH client type: external
	I1212 23:21:23.629915  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m02/id_rsa (-rw-------)
	I1212 23:21:23.629955  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:21:23.629974  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | About to run SSH command:
	I1212 23:21:23.629994  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | exit 0
	I1212 23:21:23.720313  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | SSH cmd err, output: <nil>: 
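[editor's note] WaitForSSH above shells out to the system ssh binary with the options printed in the log and simply runs `exit 0` until it succeeds. A hedged sketch of that probe with os/exec follows; the host, key path, and option subset are copied from the log, while the helper name and flag selection are illustrative.

// Probe SSH reachability by running `exit 0` through the external ssh client.
package main

import (
	"fmt"
	"os/exec"
)

func sshReady(host, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	fmt.Println(sshReady("192.168.39.109",
		"/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m02/id_rsa"))
}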
	I1212 23:21:23.720583  156765 main.go:141] libmachine: (multinode-510563-m02) KVM machine creation complete!
	I1212 23:21:23.720867  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetConfigRaw
	I1212 23:21:23.721414  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .DriverName
	I1212 23:21:23.721628  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .DriverName
	I1212 23:21:23.721769  156765 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 23:21:23.721784  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetState
	I1212 23:21:23.723183  156765 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 23:21:23.723199  156765 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 23:21:23.723205  156765 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 23:21:23.723212  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHHostname
	I1212 23:21:23.725755  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:23.726094  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:21:23.726114  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:23.726306  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHPort
	I1212 23:21:23.726508  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:21:23.726655  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:21:23.726772  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHUsername
	I1212 23:21:23.726924  156765 main.go:141] libmachine: Using SSH client type: native
	I1212 23:21:23.727307  156765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1212 23:21:23.727320  156765 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 23:21:23.839634  156765 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:21:23.839672  156765 main.go:141] libmachine: Detecting the provisioner...
	I1212 23:21:23.839697  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHHostname
	I1212 23:21:23.842711  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:23.842992  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:21:23.843017  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:23.843172  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHPort
	I1212 23:21:23.843351  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:21:23.843482  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:21:23.843618  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHUsername
	I1212 23:21:23.843774  156765 main.go:141] libmachine: Using SSH client type: native
	I1212 23:21:23.844089  156765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1212 23:21:23.844103  156765 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 23:21:23.957465  156765 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0ec83c8-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 23:21:23.957539  156765 main.go:141] libmachine: found compatible host: buildroot
	I1212 23:21:23.957554  156765 main.go:141] libmachine: Provisioning with buildroot...
	I1212 23:21:23.957570  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetMachineName
	I1212 23:21:23.957806  156765 buildroot.go:166] provisioning hostname "multinode-510563-m02"
	I1212 23:21:23.957829  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetMachineName
	I1212 23:21:23.958090  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHHostname
	I1212 23:21:23.960688  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:23.961046  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:21:23.961068  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:23.961202  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHPort
	I1212 23:21:23.961407  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:21:23.961634  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:21:23.961803  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHUsername
	I1212 23:21:23.961990  156765 main.go:141] libmachine: Using SSH client type: native
	I1212 23:21:23.962288  156765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1212 23:21:23.962301  156765 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-510563-m02 && echo "multinode-510563-m02" | sudo tee /etc/hostname
	I1212 23:21:24.085779  156765 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-510563-m02
	
	I1212 23:21:24.085812  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHHostname
	I1212 23:21:24.088853  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:24.089269  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:21:24.089305  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:24.089476  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHPort
	I1212 23:21:24.089675  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:21:24.089816  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:21:24.089961  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHUsername
	I1212 23:21:24.090108  156765 main.go:141] libmachine: Using SSH client type: native
	I1212 23:21:24.090503  156765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1212 23:21:24.090529  156765 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-510563-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-510563-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-510563-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:21:24.211956  156765 main.go:141] libmachine: SSH cmd err, output: <nil>: 
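[editor's note] The provisioner first sets the hostname over SSH and then patches /etc/hosts with the small script shown above. One way to render that script from a node name, as a sketch only (the function name is hypothetical; the shell text mirrors what the log ran):

// Build the hostname/hosts provisioning script for a given node name.
package main

import "fmt"

func hostnameScript(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(hostnameScript("multinode-510563-m02"))
}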
	I1212 23:21:24.211985  156765 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1212 23:21:24.212005  156765 buildroot.go:174] setting up certificates
	I1212 23:21:24.212018  156765 provision.go:83] configureAuth start
	I1212 23:21:24.212028  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetMachineName
	I1212 23:21:24.212328  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetIP
	I1212 23:21:24.214923  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:24.215286  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:21:24.215314  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:24.215398  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHHostname
	I1212 23:21:24.217444  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:24.217745  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:21:24.217773  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:24.217888  156765 provision.go:138] copyHostCerts
	I1212 23:21:24.217944  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1212 23:21:24.217986  156765 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1212 23:21:24.218003  156765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1212 23:21:24.218089  156765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1212 23:21:24.218165  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1212 23:21:24.218182  156765 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1212 23:21:24.218188  156765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1212 23:21:24.218212  156765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1212 23:21:24.218257  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1212 23:21:24.218279  156765 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1212 23:21:24.218289  156765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1212 23:21:24.218326  156765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1212 23:21:24.218391  156765 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.multinode-510563-m02 san=[192.168.39.109 192.168.39.109 localhost 127.0.0.1 minikube multinode-510563-m02]
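[editor's note] The "generating server cert" line above lists the SANs baked into the per-node TLS server certificate (node IP, localhost, minikube, and the hostname). A rough crypto/x509 sketch of issuing such a cert from an existing CA is below; the file paths, RSA/PKCS#1 key format, serial, and validity window are all assumptions, not minikube's provision code.

// Issue a server certificate signed by an existing CA, with DNS and IP SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func mustDecode(path string) []byte {
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatalf("no PEM data in %s", path)
	}
	return block.Bytes
}

func main() {
	caCert, err := x509.ParseCertificate(mustDecode("ca.pem")) // assumed path
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem")) // assumes an RSA PKCS#1 CA key
	if err != nil {
		log.Fatal(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-510563-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-510563-m02"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.109"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}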
	I1212 23:21:24.444193  156765 provision.go:172] copyRemoteCerts
	I1212 23:21:24.444250  156765 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:21:24.444272  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHHostname
	I1212 23:21:24.446973  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:24.447292  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:21:24.447313  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:24.447498  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHPort
	I1212 23:21:24.447696  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:21:24.447848  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHUsername
	I1212 23:21:24.447980  156765 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m02/id_rsa Username:docker}
	I1212 23:21:24.536123  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 23:21:24.536192  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:21:24.560824  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 23:21:24.560907  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 23:21:24.583341  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 23:21:24.583457  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 23:21:24.605604  156765 provision.go:86] duration metric: configureAuth took 393.574476ms
	I1212 23:21:24.605632  156765 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:21:24.605795  156765 config.go:182] Loaded profile config "multinode-510563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:21:24.605863  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHHostname
	I1212 23:21:24.608647  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:24.608973  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:21:24.609003  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:24.609189  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHPort
	I1212 23:21:24.609368  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:21:24.609556  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:21:24.609699  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHUsername
	I1212 23:21:24.609882  156765 main.go:141] libmachine: Using SSH client type: native
	I1212 23:21:24.610193  156765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1212 23:21:24.610209  156765 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:21:24.924699  156765 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:21:24.924724  156765 main.go:141] libmachine: Checking connection to Docker...
	I1212 23:21:24.924733  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetURL
	I1212 23:21:24.925956  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | Using libvirt version 6000000
	I1212 23:21:24.927989  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:24.928307  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:21:24.928327  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:24.928531  156765 main.go:141] libmachine: Docker is up and running!
	I1212 23:21:24.928544  156765 main.go:141] libmachine: Reticulating splines...
	I1212 23:21:24.928550  156765 client.go:171] LocalClient.Create took 25.525428375s
	I1212 23:21:24.928572  156765 start.go:167] duration metric: libmachine.API.Create for "multinode-510563" took 25.525493272s
	I1212 23:21:24.928586  156765 start.go:300] post-start starting for "multinode-510563-m02" (driver="kvm2")
	I1212 23:21:24.928600  156765 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:21:24.928623  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .DriverName
	I1212 23:21:24.928861  156765 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:21:24.928895  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHHostname
	I1212 23:21:24.930953  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:24.931354  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:21:24.931386  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:24.931494  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHPort
	I1212 23:21:24.931696  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:21:24.931879  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHUsername
	I1212 23:21:24.932054  156765 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m02/id_rsa Username:docker}
	I1212 23:21:25.019989  156765 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:21:25.024449  156765 command_runner.go:130] > NAME=Buildroot
	I1212 23:21:25.024473  156765 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
	I1212 23:21:25.024479  156765 command_runner.go:130] > ID=buildroot
	I1212 23:21:25.024488  156765 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 23:21:25.024497  156765 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 23:21:25.024538  156765 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:21:25.024556  156765 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1212 23:21:25.024631  156765 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1212 23:21:25.024759  156765 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1212 23:21:25.024776  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> /etc/ssl/certs/1435412.pem
	I1212 23:21:25.024907  156765 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:21:25.035196  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1212 23:21:25.058929  156765 start.go:303] post-start completed in 130.326324ms
	I1212 23:21:25.058982  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetConfigRaw
	I1212 23:21:25.059558  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetIP
	I1212 23:21:25.061956  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:25.062318  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:21:25.062343  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:25.062602  156765 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/config.json ...
	I1212 23:21:25.062825  156765 start.go:128] duration metric: createHost completed in 25.678010636s
	I1212 23:21:25.062851  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHHostname
	I1212 23:21:25.065247  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:25.065646  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:21:25.065677  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:25.065843  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHPort
	I1212 23:21:25.066056  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:21:25.066210  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:21:25.066360  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHUsername
	I1212 23:21:25.066512  156765 main.go:141] libmachine: Using SSH client type: native
	I1212 23:21:25.066844  156765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1212 23:21:25.066859  156765 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:21:25.181202  156765 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423285.152640785
	
	I1212 23:21:25.181237  156765 fix.go:206] guest clock: 1702423285.152640785
	I1212 23:21:25.181246  156765 fix.go:219] Guest: 2023-12-12 23:21:25.152640785 +0000 UTC Remote: 2023-12-12 23:21:25.062839762 +0000 UTC m=+95.500208542 (delta=89.801023ms)
	I1212 23:21:25.181268  156765 fix.go:190] guest clock delta is within tolerance: 89.801023ms
	I1212 23:21:25.181278  156765 start.go:83] releasing machines lock for "multinode-510563-m02", held for 25.796584966s
	I1212 23:21:25.181308  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .DriverName
	I1212 23:21:25.181577  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetIP
	I1212 23:21:25.184294  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:25.184735  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:21:25.184762  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:25.187163  156765 out.go:177] * Found network options:
	I1212 23:21:25.188842  156765 out.go:177]   - NO_PROXY=192.168.39.38
	W1212 23:21:25.190380  156765 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 23:21:25.190437  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .DriverName
	I1212 23:21:25.191152  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .DriverName
	I1212 23:21:25.191363  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .DriverName
	I1212 23:21:25.191457  156765 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:21:25.191494  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHHostname
	W1212 23:21:25.191563  156765 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 23:21:25.191647  156765 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:21:25.191675  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHHostname
	I1212 23:21:25.194412  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:25.194534  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:25.194779  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:21:25.194810  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:25.194980  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHPort
	I1212 23:21:25.195052  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:21:25.195082  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:25.195209  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:21:25.195411  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHUsername
	I1212 23:21:25.195414  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHPort
	I1212 23:21:25.195593  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:21:25.195597  156765 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m02/id_rsa Username:docker}
	I1212 23:21:25.195772  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHUsername
	I1212 23:21:25.196024  156765 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m02/id_rsa Username:docker}
	I1212 23:21:25.434662  156765 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 23:21:25.434709  156765 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 23:21:25.441509  156765 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 23:21:25.441748  156765 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:21:25.441816  156765 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:21:25.456909  156765 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 23:21:25.457380  156765 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:21:25.457403  156765 start.go:475] detecting cgroup driver to use...
	I1212 23:21:25.457477  156765 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:21:25.471633  156765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:21:25.483816  156765 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:21:25.483883  156765 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:21:25.496554  156765 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:21:25.509488  156765 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:21:25.618408  156765 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1212 23:21:25.618491  156765 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:21:25.633030  156765 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1212 23:21:25.745191  156765 docker.go:219] disabling docker service ...
	I1212 23:21:25.745272  156765 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:21:25.758997  156765 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:21:25.769593  156765 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1212 23:21:25.770049  156765 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:21:25.783556  156765 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1212 23:21:25.900799  156765 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:21:26.019393  156765 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1212 23:21:26.019432  156765 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1212 23:21:26.019507  156765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:21:26.033574  156765 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:21:26.050522  156765 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 23:21:26.050565  156765 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:21:26.050619  156765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:21:26.060050  156765 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:21:26.060116  156765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:21:26.068955  156765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:21:26.077738  156765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:21:26.086680  156765 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:21:26.095790  156765 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:21:26.104176  156765 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:21:26.104245  156765 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:21:26.104300  156765 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:21:26.116322  156765 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:21:26.126443  156765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:21:26.257211  156765 ssh_runner.go:195] Run: sudo systemctl restart crio
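[editor's note] The commands above configure CRI-O on the new node (pause image registry.k8s.io/pause:3.9, cgroupfs as the cgroup manager, conmon_cgroup = "pod", the crictl endpoint, br_netfilter, and ip_forward) and then restart the service. The same sequence, collected into one local sketch for readability; it assumes root access on the guest and is not how minikube's ssh_runner actually batches the commands.

// Run the CRI-O configuration commands from the log as one local batch.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmds := []string{
		`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' > /etc/crictl.yaml`,
		`modprobe br_netfilter`,
		`echo 1 > /proc/sys/net/ipv4/ip_forward`,
		`systemctl daemon-reload`,
		`systemctl restart crio`,
	}
	for _, c := range cmds {
		if out, err := exec.Command("sudo", "sh", "-c", c).CombinedOutput(); err != nil {
			log.Fatalf("%q failed: %v\n%s", c, err, out)
		}
	}
}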
	I1212 23:21:26.419087  156765 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:21:26.419163  156765 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:21:26.424648  156765 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 23:21:26.424674  156765 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 23:21:26.424685  156765 command_runner.go:130] > Device: 16h/22d	Inode: 742         Links: 1
	I1212 23:21:26.424697  156765 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 23:21:26.424702  156765 command_runner.go:130] > Access: 2023-12-12 23:21:26.379052882 +0000
	I1212 23:21:26.424708  156765 command_runner.go:130] > Modify: 2023-12-12 23:21:26.379052882 +0000
	I1212 23:21:26.424713  156765 command_runner.go:130] > Change: 2023-12-12 23:21:26.379052882 +0000
	I1212 23:21:26.424717  156765 command_runner.go:130] >  Birth: -
	I1212 23:21:26.424898  156765 start.go:543] Will wait 60s for crictl version
	I1212 23:21:26.424959  156765 ssh_runner.go:195] Run: which crictl
	I1212 23:21:26.430122  156765 command_runner.go:130] > /usr/bin/crictl
	I1212 23:21:26.430601  156765 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:21:26.467028  156765 command_runner.go:130] > Version:  0.1.0
	I1212 23:21:26.467053  156765 command_runner.go:130] > RuntimeName:  cri-o
	I1212 23:21:26.467058  156765 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1212 23:21:26.467064  156765 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 23:21:26.468351  156765 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:21:26.468448  156765 ssh_runner.go:195] Run: crio --version
	I1212 23:21:26.512170  156765 command_runner.go:130] > crio version 1.24.1
	I1212 23:21:26.512196  156765 command_runner.go:130] > Version:          1.24.1
	I1212 23:21:26.512206  156765 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 23:21:26.512211  156765 command_runner.go:130] > GitTreeState:     dirty
	I1212 23:21:26.512221  156765 command_runner.go:130] > BuildDate:        2023-12-08T06:18:18Z
	I1212 23:21:26.512228  156765 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 23:21:26.512234  156765 command_runner.go:130] > Compiler:         gc
	I1212 23:21:26.512242  156765 command_runner.go:130] > Platform:         linux/amd64
	I1212 23:21:26.512254  156765 command_runner.go:130] > Linkmode:         dynamic
	I1212 23:21:26.512267  156765 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 23:21:26.512278  156765 command_runner.go:130] > SeccompEnabled:   true
	I1212 23:21:26.512286  156765 command_runner.go:130] > AppArmorEnabled:  false
	I1212 23:21:26.512362  156765 ssh_runner.go:195] Run: crio --version
	I1212 23:21:26.557991  156765 command_runner.go:130] > crio version 1.24.1
	I1212 23:21:26.558015  156765 command_runner.go:130] > Version:          1.24.1
	I1212 23:21:26.558025  156765 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 23:21:26.558031  156765 command_runner.go:130] > GitTreeState:     dirty
	I1212 23:21:26.558046  156765 command_runner.go:130] > BuildDate:        2023-12-08T06:18:18Z
	I1212 23:21:26.558053  156765 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 23:21:26.558059  156765 command_runner.go:130] > Compiler:         gc
	I1212 23:21:26.558066  156765 command_runner.go:130] > Platform:         linux/amd64
	I1212 23:21:26.558074  156765 command_runner.go:130] > Linkmode:         dynamic
	I1212 23:21:26.558092  156765 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 23:21:26.558104  156765 command_runner.go:130] > SeccompEnabled:   true
	I1212 23:21:26.558112  156765 command_runner.go:130] > AppArmorEnabled:  false
	I1212 23:21:26.561292  156765 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 23:21:26.562787  156765 out.go:177]   - env NO_PROXY=192.168.39.38
	I1212 23:21:26.564118  156765 main.go:141] libmachine: (multinode-510563-m02) Calling .GetIP
	I1212 23:21:26.566977  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:26.567295  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:21:26.567324  156765 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:21:26.567512  156765 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 23:21:26.571681  156765 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:21:26.584837  156765 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563 for IP: 192.168.39.109
	I1212 23:21:26.584878  156765 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:21:26.585030  156765 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1212 23:21:26.585080  156765 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1212 23:21:26.585097  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 23:21:26.585121  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 23:21:26.585139  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 23:21:26.585156  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 23:21:26.585221  156765 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1212 23:21:26.585261  156765 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1212 23:21:26.585276  156765 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:21:26.585311  156765 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:21:26.585345  156765 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:21:26.585377  156765 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1212 23:21:26.585438  156765 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1212 23:21:26.585474  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> /usr/share/ca-certificates/1435412.pem
	I1212 23:21:26.585494  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:21:26.585520  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem -> /usr/share/ca-certificates/143541.pem
	I1212 23:21:26.585860  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:21:26.611469  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 23:21:26.634591  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:21:26.657837  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 23:21:26.686986  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1212 23:21:26.710680  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:21:26.733414  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1212 23:21:26.756924  156765 ssh_runner.go:195] Run: openssl version
	I1212 23:21:26.762242  156765 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 23:21:26.762619  156765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1212 23:21:26.772819  156765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1212 23:21:26.777264  156765 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1212 23:21:26.777335  156765 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1212 23:21:26.777397  156765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1212 23:21:26.782734  156765 command_runner.go:130] > 3ec20f2e
	I1212 23:21:26.783037  156765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:21:26.793256  156765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:21:26.802916  156765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:21:26.807331  156765 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:21:26.807446  156765 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:21:26.807492  156765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:21:26.812818  156765 command_runner.go:130] > b5213941
	I1212 23:21:26.813758  156765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:21:26.823829  156765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1212 23:21:26.833540  156765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1212 23:21:26.837619  156765 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1212 23:21:26.837882  156765 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1212 23:21:26.837945  156765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1212 23:21:26.843099  156765 command_runner.go:130] > 51391683
	I1212 23:21:26.843316  156765 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
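The three hash-and-link passes above follow OpenSSL's CA lookup convention: certificates under /etc/ssl/certs are located through a symlink named after the subject-name hash with a ".0" suffix, which is exactly what "openssl x509 -hash -noout" computes (b5213941 for minikubeCA above). A condensed sketch of one pass, assuming the certificate has already been copied to /usr/share/ca-certificates:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")        # e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"       # make the CA discoverable by OpenSSL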
	I1212 23:21:26.853062  156765 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:21:26.857225  156765 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:21:26.857258  156765 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:21:26.857354  156765 ssh_runner.go:195] Run: crio config
	I1212 23:21:26.924215  156765 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 23:21:26.924245  156765 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 23:21:26.924257  156765 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 23:21:26.924263  156765 command_runner.go:130] > #
	I1212 23:21:26.924272  156765 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 23:21:26.924278  156765 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 23:21:26.924285  156765 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 23:21:26.924295  156765 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 23:21:26.924299  156765 command_runner.go:130] > # reload'.
	I1212 23:21:26.924305  156765 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 23:21:26.924311  156765 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 23:21:26.924318  156765 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 23:21:26.924327  156765 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 23:21:26.924338  156765 command_runner.go:130] > [crio]
	I1212 23:21:26.924348  156765 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 23:21:26.924357  156765 command_runner.go:130] > # containers images, in this directory.
	I1212 23:21:26.924380  156765 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1212 23:21:26.924391  156765 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 23:21:26.924396  156765 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1212 23:21:26.924402  156765 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 23:21:26.924410  156765 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 23:21:26.924661  156765 command_runner.go:130] > storage_driver = "overlay"
	I1212 23:21:26.924677  156765 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 23:21:26.924688  156765 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 23:21:26.924696  156765 command_runner.go:130] > storage_option = [
	I1212 23:21:26.924905  156765 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1212 23:21:26.924943  156765 command_runner.go:130] > ]
	I1212 23:21:26.924955  156765 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 23:21:26.924965  156765 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 23:21:26.925354  156765 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 23:21:26.925369  156765 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 23:21:26.925378  156765 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 23:21:26.925386  156765 command_runner.go:130] > # always happen on a node reboot
	I1212 23:21:26.925851  156765 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 23:21:26.925865  156765 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 23:21:26.925875  156765 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 23:21:26.925891  156765 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 23:21:26.926271  156765 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1212 23:21:26.926289  156765 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 23:21:26.926302  156765 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 23:21:26.926683  156765 command_runner.go:130] > # internal_wipe = true
	I1212 23:21:26.926699  156765 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 23:21:26.926709  156765 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 23:21:26.926718  156765 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 23:21:26.927078  156765 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 23:21:26.927098  156765 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 23:21:26.927105  156765 command_runner.go:130] > [crio.api]
	I1212 23:21:26.927114  156765 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 23:21:26.927136  156765 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 23:21:26.927149  156765 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 23:21:26.927157  156765 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 23:21:26.927171  156765 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 23:21:26.927182  156765 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 23:21:26.927310  156765 command_runner.go:130] > # stream_port = "0"
	I1212 23:21:26.927325  156765 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 23:21:26.927330  156765 command_runner.go:130] > # stream_enable_tls = false
	I1212 23:21:26.927336  156765 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 23:21:26.927355  156765 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 23:21:26.927364  156765 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 23:21:26.927370  156765 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1212 23:21:26.927376  156765 command_runner.go:130] > # minutes.
	I1212 23:21:26.927381  156765 command_runner.go:130] > # stream_tls_cert = ""
	I1212 23:21:26.927389  156765 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 23:21:26.927396  156765 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1212 23:21:26.927402  156765 command_runner.go:130] > # stream_tls_key = ""
	I1212 23:21:26.927408  156765 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 23:21:26.927416  156765 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 23:21:26.927424  156765 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1212 23:21:26.927430  156765 command_runner.go:130] > # stream_tls_ca = ""
	I1212 23:21:26.927438  156765 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 23:21:26.927445  156765 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1212 23:21:26.927456  156765 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 23:21:26.927462  156765 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1212 23:21:26.927474  156765 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 23:21:26.927482  156765 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 23:21:26.927487  156765 command_runner.go:130] > [crio.runtime]
	I1212 23:21:26.927495  156765 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 23:21:26.927501  156765 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 23:21:26.927507  156765 command_runner.go:130] > # "nofile=1024:2048"
	I1212 23:21:26.927513  156765 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 23:21:26.927528  156765 command_runner.go:130] > # default_ulimits = [
	I1212 23:21:26.927535  156765 command_runner.go:130] > # ]
	I1212 23:21:26.927541  156765 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 23:21:26.927556  156765 command_runner.go:130] > # no_pivot = false
	I1212 23:21:26.927564  156765 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 23:21:26.927570  156765 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 23:21:26.927575  156765 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 23:21:26.927583  156765 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 23:21:26.927588  156765 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 23:21:26.927596  156765 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 23:21:26.927601  156765 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1212 23:21:26.927605  156765 command_runner.go:130] > # Cgroup setting for conmon
	I1212 23:21:26.927614  156765 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 23:21:26.927619  156765 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 23:21:26.927625  156765 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 23:21:26.927633  156765 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 23:21:26.927639  156765 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 23:21:26.927645  156765 command_runner.go:130] > conmon_env = [
	I1212 23:21:26.927773  156765 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1212 23:21:26.927787  156765 command_runner.go:130] > ]
	I1212 23:21:26.927796  156765 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 23:21:26.927805  156765 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 23:21:26.927815  156765 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 23:21:26.927827  156765 command_runner.go:130] > # default_env = [
	I1212 23:21:26.927836  156765 command_runner.go:130] > # ]
	I1212 23:21:26.927845  156765 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 23:21:26.927855  156765 command_runner.go:130] > # selinux = false
	I1212 23:21:26.927867  156765 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 23:21:26.927880  156765 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1212 23:21:26.927892  156765 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1212 23:21:26.927915  156765 command_runner.go:130] > # seccomp_profile = ""
	I1212 23:21:26.927929  156765 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1212 23:21:26.927942  156765 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1212 23:21:26.927955  156765 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1212 23:21:26.927966  156765 command_runner.go:130] > # which might increase security.
	I1212 23:21:26.927975  156765 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1212 23:21:26.927989  156765 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 23:21:26.928002  156765 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 23:21:26.928020  156765 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 23:21:26.928037  156765 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 23:21:26.928048  156765 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:21:26.928060  156765 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 23:21:26.928072  156765 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 23:21:26.928083  156765 command_runner.go:130] > # the cgroup blockio controller.
	I1212 23:21:26.928091  156765 command_runner.go:130] > # blockio_config_file = ""
	I1212 23:21:26.928105  156765 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 23:21:26.928115  156765 command_runner.go:130] > # irqbalance daemon.
	I1212 23:21:26.928125  156765 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 23:21:26.928138  156765 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 23:21:26.928150  156765 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:21:26.928161  156765 command_runner.go:130] > # rdt_config_file = ""
	I1212 23:21:26.928172  156765 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 23:21:26.928182  156765 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 23:21:26.928194  156765 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 23:21:26.928218  156765 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 23:21:26.928232  156765 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 23:21:26.928246  156765 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 23:21:26.928256  156765 command_runner.go:130] > # will be added.
	I1212 23:21:26.928263  156765 command_runner.go:130] > # default_capabilities = [
	I1212 23:21:26.928274  156765 command_runner.go:130] > # 	"CHOWN",
	I1212 23:21:26.928285  156765 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 23:21:26.928292  156765 command_runner.go:130] > # 	"FSETID",
	I1212 23:21:26.928301  156765 command_runner.go:130] > # 	"FOWNER",
	I1212 23:21:26.928308  156765 command_runner.go:130] > # 	"SETGID",
	I1212 23:21:26.928317  156765 command_runner.go:130] > # 	"SETUID",
	I1212 23:21:26.928327  156765 command_runner.go:130] > # 	"SETPCAP",
	I1212 23:21:26.928335  156765 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 23:21:26.928343  156765 command_runner.go:130] > # 	"KILL",
	I1212 23:21:26.928349  156765 command_runner.go:130] > # ]
	I1212 23:21:26.928363  156765 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 23:21:26.928376  156765 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 23:21:26.928386  156765 command_runner.go:130] > # default_sysctls = [
	I1212 23:21:26.928392  156765 command_runner.go:130] > # ]
	I1212 23:21:26.928400  156765 command_runner.go:130] > # List of devices on the host that a
	I1212 23:21:26.928414  156765 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 23:21:26.928424  156765 command_runner.go:130] > # allowed_devices = [
	I1212 23:21:26.928462  156765 command_runner.go:130] > # 	"/dev/fuse",
	I1212 23:21:26.928473  156765 command_runner.go:130] > # ]
	I1212 23:21:26.928484  156765 command_runner.go:130] > # List of additional devices, specified as
	I1212 23:21:26.928500  156765 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 23:21:26.928510  156765 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 23:21:26.928530  156765 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 23:21:26.928541  156765 command_runner.go:130] > # additional_devices = [
	I1212 23:21:26.928547  156765 command_runner.go:130] > # ]
	I1212 23:21:26.928559  156765 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 23:21:26.928569  156765 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 23:21:26.928576  156765 command_runner.go:130] > # 	"/etc/cdi",
	I1212 23:21:26.928584  156765 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 23:21:26.928593  156765 command_runner.go:130] > # ]
	I1212 23:21:26.928604  156765 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 23:21:26.928617  156765 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 23:21:26.928627  156765 command_runner.go:130] > # Defaults to false.
	I1212 23:21:26.928637  156765 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 23:21:26.928651  156765 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 23:21:26.928664  156765 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 23:21:26.928675  156765 command_runner.go:130] > # hooks_dir = [
	I1212 23:21:26.928686  156765 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 23:21:26.928695  156765 command_runner.go:130] > # ]
	I1212 23:21:26.928709  156765 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 23:21:26.928722  156765 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 23:21:26.928732  156765 command_runner.go:130] > # its default mounts from the following two files:
	I1212 23:21:26.928741  156765 command_runner.go:130] > #
	I1212 23:21:26.928754  156765 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 23:21:26.928768  156765 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 23:21:26.928780  156765 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 23:21:26.928789  156765 command_runner.go:130] > #
	I1212 23:21:26.928800  156765 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 23:21:26.928813  156765 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 23:21:26.928827  156765 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 23:21:26.928837  156765 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 23:21:26.928845  156765 command_runner.go:130] > #
	I1212 23:21:26.928878  156765 command_runner.go:130] > # default_mounts_file = ""
	I1212 23:21:26.928892  156765 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 23:21:26.928904  156765 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 23:21:26.928914  156765 command_runner.go:130] > pids_limit = 1024
	I1212 23:21:26.928925  156765 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1212 23:21:26.928938  156765 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 23:21:26.928950  156765 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 23:21:26.928964  156765 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 23:21:26.928973  156765 command_runner.go:130] > # log_size_max = -1
	I1212 23:21:26.928985  156765 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 23:21:26.928996  156765 command_runner.go:130] > # log_to_journald = false
	I1212 23:21:26.929006  156765 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 23:21:26.929021  156765 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 23:21:26.929033  156765 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 23:21:26.929045  156765 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 23:21:26.929058  156765 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 23:21:26.929069  156765 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 23:21:26.929081  156765 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 23:21:26.929108  156765 command_runner.go:130] > # read_only = false
	I1212 23:21:26.929120  156765 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 23:21:26.929131  156765 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 23:21:26.929142  156765 command_runner.go:130] > # live configuration reload.
	I1212 23:21:26.929151  156765 command_runner.go:130] > # log_level = "info"
	I1212 23:21:26.929163  156765 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 23:21:26.929172  156765 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:21:26.929183  156765 command_runner.go:130] > # log_filter = ""
	I1212 23:21:26.929195  156765 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 23:21:26.929208  156765 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 23:21:26.929218  156765 command_runner.go:130] > # separated by comma.
	I1212 23:21:26.929227  156765 command_runner.go:130] > # uid_mappings = ""
	I1212 23:21:26.929235  156765 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 23:21:26.929245  156765 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 23:21:26.929250  156765 command_runner.go:130] > # separated by comma.
	I1212 23:21:26.929257  156765 command_runner.go:130] > # gid_mappings = ""
	I1212 23:21:26.929265  156765 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 23:21:26.929278  156765 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 23:21:26.929287  156765 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 23:21:26.929299  156765 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 23:21:26.929310  156765 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 23:21:26.929324  156765 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 23:21:26.929337  156765 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 23:21:26.929363  156765 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 23:21:26.929377  156765 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 23:21:26.929387  156765 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 23:21:26.929400  156765 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 23:21:26.929411  156765 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 23:21:26.929424  156765 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 23:21:26.929436  156765 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 23:21:26.929447  156765 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 23:21:26.929459  156765 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 23:21:26.929465  156765 command_runner.go:130] > drop_infra_ctr = false
	I1212 23:21:26.929475  156765 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 23:21:26.929483  156765 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 23:21:26.929492  156765 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 23:21:26.929496  156765 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 23:21:26.929505  156765 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 23:21:26.929510  156765 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 23:21:26.929515  156765 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 23:21:26.929522  156765 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 23:21:26.929529  156765 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1212 23:21:26.929536  156765 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 23:21:26.929544  156765 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1212 23:21:26.929553  156765 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1212 23:21:26.929563  156765 command_runner.go:130] > # default_runtime = "runc"
	I1212 23:21:26.929572  156765 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 23:21:26.929585  156765 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1212 23:21:26.929603  156765 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 23:21:26.929611  156765 command_runner.go:130] > # creation as a file is not desired either.
	I1212 23:21:26.929624  156765 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 23:21:26.929634  156765 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 23:21:26.929640  156765 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 23:21:26.929650  156765 command_runner.go:130] > # ]
	I1212 23:21:26.929660  156765 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 23:21:26.929672  156765 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 23:21:26.929684  156765 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1212 23:21:26.929695  156765 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1212 23:21:26.929704  156765 command_runner.go:130] > #
	I1212 23:21:26.929715  156765 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1212 23:21:26.929726  156765 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1212 23:21:26.929736  156765 command_runner.go:130] > #  runtime_type = "oci"
	I1212 23:21:26.929743  156765 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1212 23:21:26.929754  156765 command_runner.go:130] > #  privileged_without_host_devices = false
	I1212 23:21:26.929761  156765 command_runner.go:130] > #  allowed_annotations = []
	I1212 23:21:26.929770  156765 command_runner.go:130] > # Where:
	I1212 23:21:26.929777  156765 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1212 23:21:26.929789  156765 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1212 23:21:26.929801  156765 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 23:21:26.929814  156765 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 23:21:26.929826  156765 command_runner.go:130] > #   in $PATH.
	I1212 23:21:26.929836  156765 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1212 23:21:26.929844  156765 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 23:21:26.929857  156765 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1212 23:21:26.929863  156765 command_runner.go:130] > #   state.
	I1212 23:21:26.929871  156765 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 23:21:26.929876  156765 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1212 23:21:26.929883  156765 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 23:21:26.929889  156765 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 23:21:26.929897  156765 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 23:21:26.929904  156765 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 23:21:26.929912  156765 command_runner.go:130] > #   The currently recognized values are:
	I1212 23:21:26.929918  156765 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 23:21:26.929927  156765 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 23:21:26.929936  156765 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 23:21:26.929942  156765 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 23:21:26.929949  156765 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 23:21:26.929958  156765 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 23:21:26.929964  156765 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 23:21:26.929994  156765 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1212 23:21:26.930007  156765 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 23:21:26.930020  156765 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 23:21:26.930031  156765 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1212 23:21:26.930038  156765 command_runner.go:130] > runtime_type = "oci"
	I1212 23:21:26.930047  156765 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 23:21:26.930054  156765 command_runner.go:130] > runtime_config_path = ""
	I1212 23:21:26.930062  156765 command_runner.go:130] > monitor_path = ""
	I1212 23:21:26.930072  156765 command_runner.go:130] > monitor_cgroup = ""
	I1212 23:21:26.930080  156765 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 23:21:26.930093  156765 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1212 23:21:26.930103  156765 command_runner.go:130] > # running containers
	I1212 23:21:26.930110  156765 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1212 23:21:26.930119  156765 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1212 23:21:26.930163  156765 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1212 23:21:26.930172  156765 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1212 23:21:26.930181  156765 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1212 23:21:26.930191  156765 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1212 23:21:26.930202  156765 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1212 23:21:26.930211  156765 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1212 23:21:26.930223  156765 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1212 23:21:26.930235  156765 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1212 23:21:26.930248  156765 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 23:21:26.930261  156765 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 23:21:26.930275  156765 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 23:21:26.930289  156765 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1212 23:21:26.930306  156765 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1212 23:21:26.930318  156765 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 23:21:26.930333  156765 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 23:21:26.930348  156765 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 23:21:26.930355  156765 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 23:21:26.930362  156765 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 23:21:26.930367  156765 command_runner.go:130] > # Example:
	I1212 23:21:26.930372  156765 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 23:21:26.930377  156765 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 23:21:26.930384  156765 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 23:21:26.930391  156765 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 23:21:26.930400  156765 command_runner.go:130] > # cpuset = 0
	I1212 23:21:26.930407  156765 command_runner.go:130] > # cpushares = "0-1"
	I1212 23:21:26.930416  156765 command_runner.go:130] > # Where:
	I1212 23:21:26.930425  156765 command_runner.go:130] > # The workload name is workload-type.
	I1212 23:21:26.930439  156765 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 23:21:26.930451  156765 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 23:21:26.930464  156765 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 23:21:26.930476  156765 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 23:21:26.930490  156765 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 23:21:26.930497  156765 command_runner.go:130] > # 
	I1212 23:21:26.930508  156765 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 23:21:26.930517  156765 command_runner.go:130] > #
	I1212 23:21:26.930526  156765 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 23:21:26.930539  156765 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1212 23:21:26.930553  156765 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1212 23:21:26.930566  156765 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1212 23:21:26.930576  156765 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1212 23:21:26.930586  156765 command_runner.go:130] > [crio.image]
	I1212 23:21:26.930597  156765 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 23:21:26.930608  156765 command_runner.go:130] > # default_transport = "docker://"
	I1212 23:21:26.930620  156765 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 23:21:26.930632  156765 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 23:21:26.930641  156765 command_runner.go:130] > # global_auth_file = ""
	I1212 23:21:26.930652  156765 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 23:21:26.930662  156765 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:21:26.930674  156765 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1212 23:21:26.930685  156765 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 23:21:26.930698  156765 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 23:21:26.930709  156765 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:21:26.930717  156765 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 23:21:26.930726  156765 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 23:21:26.930735  156765 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1212 23:21:26.930749  156765 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1212 23:21:26.930759  156765 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 23:21:26.930770  156765 command_runner.go:130] > # pause_command = "/pause"
	I1212 23:21:26.930781  156765 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 23:21:26.930820  156765 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 23:21:26.930831  156765 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 23:21:26.930848  156765 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 23:21:26.930857  156765 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 23:21:26.930865  156765 command_runner.go:130] > # signature_policy = ""
	I1212 23:21:26.930878  156765 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 23:21:26.930892  156765 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 23:21:26.930901  156765 command_runner.go:130] > # changing them here.
	I1212 23:21:26.930909  156765 command_runner.go:130] > # insecure_registries = [
	I1212 23:21:26.930918  156765 command_runner.go:130] > # ]
	I1212 23:21:26.930929  156765 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 23:21:26.930941  156765 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 23:21:26.930949  156765 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 23:21:26.930961  156765 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 23:21:26.930972  156765 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 23:21:26.930984  156765 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 23:21:26.930994  156765 command_runner.go:130] > # CNI plugins.
	I1212 23:21:26.931003  156765 command_runner.go:130] > [crio.network]
	I1212 23:21:26.931015  156765 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 23:21:26.931027  156765 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1212 23:21:26.931039  156765 command_runner.go:130] > # cni_default_network = ""
	I1212 23:21:26.931052  156765 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 23:21:26.931062  156765 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 23:21:26.931073  156765 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 23:21:26.931082  156765 command_runner.go:130] > # plugin_dirs = [
	I1212 23:21:26.931089  156765 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 23:21:26.931098  156765 command_runner.go:130] > # ]
	I1212 23:21:26.931105  156765 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 23:21:26.931113  156765 command_runner.go:130] > [crio.metrics]
	I1212 23:21:26.931122  156765 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 23:21:26.931132  156765 command_runner.go:130] > enable_metrics = true
	I1212 23:21:26.931141  156765 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 23:21:26.931152  156765 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 23:21:26.931166  156765 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1212 23:21:26.931179  156765 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 23:21:26.931192  156765 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 23:21:26.931201  156765 command_runner.go:130] > # metrics_collectors = [
	I1212 23:21:26.931207  156765 command_runner.go:130] > # 	"operations",
	I1212 23:21:26.931215  156765 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1212 23:21:26.931227  156765 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1212 23:21:26.931237  156765 command_runner.go:130] > # 	"operations_errors",
	I1212 23:21:26.931245  156765 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1212 23:21:26.931257  156765 command_runner.go:130] > # 	"image_pulls_by_name",
	I1212 23:21:26.931268  156765 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1212 23:21:26.931278  156765 command_runner.go:130] > # 	"image_pulls_failures",
	I1212 23:21:26.931289  156765 command_runner.go:130] > # 	"image_pulls_successes",
	I1212 23:21:26.931299  156765 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 23:21:26.931308  156765 command_runner.go:130] > # 	"image_layer_reuse",
	I1212 23:21:26.931316  156765 command_runner.go:130] > # 	"containers_oom_total",
	I1212 23:21:26.931327  156765 command_runner.go:130] > # 	"containers_oom",
	I1212 23:21:26.931337  156765 command_runner.go:130] > # 	"processes_defunct",
	I1212 23:21:26.931344  156765 command_runner.go:130] > # 	"operations_total",
	I1212 23:21:26.931355  156765 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 23:21:26.931366  156765 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 23:21:26.931376  156765 command_runner.go:130] > # 	"operations_errors_total",
	I1212 23:21:26.931387  156765 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 23:21:26.931400  156765 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 23:21:26.931410  156765 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 23:21:26.931421  156765 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 23:21:26.931432  156765 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 23:21:26.931443  156765 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 23:21:26.931451  156765 command_runner.go:130] > # ]
	I1212 23:21:26.931463  156765 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 23:21:26.931473  156765 command_runner.go:130] > # metrics_port = 9090
	I1212 23:21:26.931485  156765 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 23:21:26.931493  156765 command_runner.go:130] > # metrics_socket = ""
	I1212 23:21:26.931503  156765 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 23:21:26.931516  156765 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 23:21:26.931529  156765 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 23:21:26.931541  156765 command_runner.go:130] > # certificate on any modification event.
	I1212 23:21:26.931551  156765 command_runner.go:130] > # metrics_cert = ""
	I1212 23:21:26.931562  156765 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 23:21:26.931573  156765 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 23:21:26.931583  156765 command_runner.go:130] > # metrics_key = ""
	I1212 23:21:26.931600  156765 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 23:21:26.931608  156765 command_runner.go:130] > [crio.tracing]
	I1212 23:21:26.931621  156765 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 23:21:26.931631  156765 command_runner.go:130] > # enable_tracing = false
	I1212 23:21:26.931640  156765 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1212 23:21:26.931651  156765 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1212 23:21:26.931660  156765 command_runner.go:130] > # Number of samples to collect per million spans.
	I1212 23:21:26.931671  156765 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 23:21:26.931685  156765 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 23:21:26.931694  156765 command_runner.go:130] > [crio.stats]
	I1212 23:21:26.931701  156765 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 23:21:26.931710  156765 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 23:21:26.931718  156765 command_runner.go:130] > # stats_collection_period = 0
	I1212 23:21:26.932034  156765 command_runner.go:130] ! time="2023-12-12 23:21:26.895782020Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1212 23:21:26.932058  156765 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
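The dump above is the merged configuration reported by "crio config"; the non-commented values (storage_driver, cgroup_manager = "cgroupfs", pids_limit = 1024, pause_image, the runc runtime stanza) are the ones set explicitly on this guest. Individual settings can also be overridden without touching the main file, since CRI-O reads drop-ins from /etc/crio/crio.conf.d/ in lexical order. A minimal sketch (the drop-in file name is arbitrary):

	sudo tee /etc/crio/crio.conf.d/99-debug.conf >/dev/null <<'EOF'
	[crio.runtime]
	log_level = "debug"
	EOF
	sudo systemctl restart crio    # or send SIGHUP for options that support live configuration reload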
	I1212 23:21:26.932137  156765 cni.go:84] Creating CNI manager for ""
	I1212 23:21:26.932148  156765 cni.go:136] 2 nodes found, recommending kindnet
	I1212 23:21:26.932161  156765 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:21:26.932193  156765 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.109 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-510563 NodeName:multinode-510563-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:21:26.932333  156765 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-510563-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
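	The kubeadm config above is rendered by minikube from the options struct logged at kubeadm.go:176. A minimal Go sketch of that kind of rendering step, using only values visible in the log (the nodeOpts struct and the template here are illustrative, not minikube's actual code):

package main

import (
	"os"
	"text/template"
)

// nodeOpts mirrors a few of the fields minikube logs at kubeadm.go:176 (illustrative only).
type nodeOpts struct {
	NodeName         string
	AdvertiseAddress string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	// Values copied from the log above.
	if err := t.Execute(os.Stdout, nodeOpts{
		NodeName:         "multinode-510563-m02",
		AdvertiseAddress: "192.168.39.109",
	}); err != nil {
		panic(err)
	}
}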
	
	I1212 23:21:26.932394  156765 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-510563-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-510563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 23:21:26.932467  156765 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:21:26.941426  156765 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I1212 23:21:26.941661  156765 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I1212 23:21:26.941716  156765 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I1212 23:21:26.950099  156765 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I1212 23:21:26.950119  156765 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I1212 23:21:26.950125  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I1212 23:21:26.950146  156765 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I1212 23:21:26.950206  156765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I1212 23:21:26.954194  156765 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I1212 23:21:26.954332  156765 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I1212 23:21:26.954352  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I1212 23:21:27.979926  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I1212 23:21:27.980019  156765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I1212 23:21:27.985441  156765 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I1212 23:21:27.985515  156765 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I1212 23:21:27.985541  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I1212 23:21:28.414324  156765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:21:28.427735  156765 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I1212 23:21:28.427837  156765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I1212 23:21:28.431872  156765 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I1212 23:21:28.431976  156765 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I1212 23:21:28.432011  156765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
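	The preceding lines show the binary-transfer path: each missing binary is fetched from dl.k8s.io, verified against the .sha256 sidecar named in the checksum= URLs, and copied to /var/lib/minikube/binaries/v1.28.4 over SSH. A standalone Go sketch of just the download-and-verify step (the fetch helper and the /tmp destination are illustrative, not minikube's code):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into dst and returns the hex sha256 of the payload.
func fetch(url, dst string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := os.Create(dst)
	if err != nil {
		return "", err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet"
	sum, err := fetch(base, "/tmp/kubelet")
	if err != nil {
		panic(err)
	}
	// The .sha256 sidecar holds the expected digest, as in the checksum= URLs above.
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	if strings.Fields(string(want))[0] != sum {
		panic("checksum mismatch")
	}
	fmt.Println("kubelet verified:", sum)
}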
	I1212 23:21:28.951353  156765 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1212 23:21:28.960700  156765 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1212 23:21:28.976080  156765 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:21:28.991302  156765 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I1212 23:21:28.995038  156765 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
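	The bash one-liner above makes the /etc/hosts update idempotent: any existing control-plane.minikube.internal line is dropped before the current mapping is appended. An equivalent sketch in Go (ensureHostsEntry is a made-up helper; it only prints the result rather than rewriting /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any line already ending in "\t"+host and appends "ip\thost",
// mirroring the grep -v / echo pipeline in the log above.
func ensureHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	joined := strings.TrimRight(strings.Join(kept, "\n"), "\n")
	return joined + fmt.Sprintf("\n%s\t%s\n", ip, host)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(ensureHostsEntry(string(data), "192.168.39.38", "control-plane.minikube.internal"))
}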
	I1212 23:21:29.006037  156765 host.go:66] Checking if "multinode-510563" exists ...
	I1212 23:21:29.006299  156765 config.go:182] Loaded profile config "multinode-510563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:21:29.006419  156765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:21:29.006459  156765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:21:29.020817  156765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I1212 23:21:29.021307  156765 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:21:29.021747  156765 main.go:141] libmachine: Using API Version  1
	I1212 23:21:29.021776  156765 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:21:29.022147  156765 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:21:29.022331  156765 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:21:29.022458  156765 start.go:304] JoinCluster: &{Name:multinode-510563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-510563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:21:29.022561  156765 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1212 23:21:29.022578  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:21:29.025404  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:21:29.025771  156765 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:21:29.025799  156765 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:21:29.025938  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:21:29.026173  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:21:29.026326  156765 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:21:29.026468  156765 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa Username:docker}
	I1212 23:21:29.222547  156765 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token yp9z5a.a9e8oxd339im8yud --discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1212 23:21:29.224927  156765 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 23:21:29.224978  156765 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yp9z5a.a9e8oxd339im8yud --discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-510563-m02"
	I1212 23:21:29.270441  156765 command_runner.go:130] ! W1212 23:21:29.247835     823 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1212 23:21:29.405009  156765 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:21:32.113314  156765 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 23:21:32.113346  156765 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1212 23:21:32.113360  156765 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1212 23:21:32.113378  156765 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:21:32.113391  156765 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:21:32.113402  156765 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 23:21:32.113412  156765 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1212 23:21:32.113425  156765 command_runner.go:130] > This node has joined the cluster:
	I1212 23:21:32.113445  156765 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1212 23:21:32.113457  156765 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1212 23:21:32.113468  156765 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1212 23:21:32.113499  156765 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yp9z5a.a9e8oxd339im8yud --discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-510563-m02": (2.888501227s)
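	The join is a two-step flow: "kubeadm token create --print-join-command --ttl=0" runs on the control plane, and the printed command is then executed on the new node with --ignore-preflight-errors=all, the CRI socket, and an explicit node name appended. A local Go sketch of the same two steps (in the real run both commands go through ssh_runner; step 2 is left as a comment here):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: on the control plane, print a join command with a non-expiring token,
	// as in the "kubeadm token create --print-join-command --ttl=0" call logged above.
	out, err := exec.Command("sudo", "kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	join := strings.TrimSpace(string(out)) +
		" --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-510563-m02"

	// Step 2: the assembled command would be run on the joining node (over SSH in the real flow):
	//   exec.Command("sudo", "bash", "-c", join).Run()
	fmt.Println("join command:", join)
}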
	I1212 23:21:32.113523  156765 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1212 23:21:32.378349  156765 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1212 23:21:32.378530  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=multinode-510563 minikube.k8s.io/updated_at=2023_12_12T23_21_32_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:21:32.500085  156765 command_runner.go:130] > node/multinode-510563-m02 labeled
	I1212 23:21:32.502234  156765 start.go:306] JoinCluster complete in 3.479772642s
	I1212 23:21:32.502256  156765 cni.go:84] Creating CNI manager for ""
	I1212 23:21:32.502264  156765 cni.go:136] 2 nodes found, recommending kindnet
	I1212 23:21:32.502323  156765 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 23:21:32.508421  156765 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 23:21:32.508454  156765 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1212 23:21:32.508466  156765 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 23:21:32.508477  156765 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 23:21:32.508486  156765 command_runner.go:130] > Access: 2023-12-12 23:20:03.268996615 +0000
	I1212 23:21:32.508494  156765 command_runner.go:130] > Modify: 2023-12-08 06:25:18.000000000 +0000
	I1212 23:21:32.508506  156765 command_runner.go:130] > Change: 2023-12-12 23:20:01.386996615 +0000
	I1212 23:21:32.508513  156765 command_runner.go:130] >  Birth: -
	I1212 23:21:32.509255  156765 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 23:21:32.509270  156765 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 23:21:32.533297  156765 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 23:21:32.867698  156765 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1212 23:21:32.872251  156765 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1212 23:21:32.875660  156765 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1212 23:21:32.892144  156765 command_runner.go:130] > daemonset.apps/kindnet configured
	I1212 23:21:32.895513  156765 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:21:32.895714  156765 kapi.go:59] client config for multinode-510563: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.crt", KeyFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.key", CAFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:21:32.895975  156765 round_trippers.go:463] GET https://192.168.39.38:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:21:32.895987  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:32.896001  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:32.896007  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:32.898483  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:21:32.898500  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:32.898507  156765 round_trippers.go:580]     Audit-Id: 9f6e77f0-5f39-47f8-8186-f2ea8f0eddce
	I1212 23:21:32.898512  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:32.898517  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:32.898522  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:32.898527  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:32.898532  156765 round_trippers.go:580]     Content-Length: 291
	I1212 23:21:32.898540  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:32 GMT
	I1212 23:21:32.898561  156765 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b98a4ef4-74a2-4d35-a29b-c065c5f3121c","resourceVersion":"460","creationTimestamp":"2023-12-12T23:20:36Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1212 23:21:32.898642  156765 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-510563" context rescaled to 1 replicas
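	The GET on .../deployments/coredns/scale above is the read side of the rescale to 1 replica reported at kapi.go:248. A hedged client-go sketch of the same read-then-update on the scale subresource (the kubeconfig path is a placeholder; this is not minikube's actual code):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	// Read the /scale subresource, as in the GET logged above.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to", scale.Spec.Replicas)
}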
	I1212 23:21:32.898668  156765 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 23:21:32.900282  156765 out.go:177] * Verifying Kubernetes components...
	I1212 23:21:32.902059  156765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:21:32.919491  156765 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:21:32.919752  156765 kapi.go:59] client config for multinode-510563: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.crt", KeyFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.key", CAFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:21:32.920024  156765 node_ready.go:35] waiting up to 6m0s for node "multinode-510563-m02" to be "Ready" ...
	I1212 23:21:32.920096  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:32.920107  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:32.920118  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:32.920130  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:32.923290  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:21:32.923312  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:32.923324  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:32.923332  156765 round_trippers.go:580]     Content-Length: 4083
	I1212 23:21:32.923340  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:32 GMT
	I1212 23:21:32.923348  156765 round_trippers.go:580]     Audit-Id: 1fa5c45c-9b9f-49f1-b951-c237c2d896e7
	I1212 23:21:32.923362  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:32.923375  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:32.923383  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:32.923546  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"512","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3059 chars]
	I1212 23:21:32.923919  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:32.923935  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:32.923943  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:32.923949  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:32.926240  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:21:32.926262  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:32.926273  156765 round_trippers.go:580]     Audit-Id: 17258279-1fe2-4c90-ac56-af8ffb80d51b
	I1212 23:21:32.926282  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:32.926292  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:32.926304  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:32.926316  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:32.926327  156765 round_trippers.go:580]     Content-Length: 4083
	I1212 23:21:32.926335  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:32 GMT
	I1212 23:21:32.926423  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"512","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3059 chars]
	I1212 23:21:33.427561  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:33.427587  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:33.427596  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:33.427602  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:33.559654  156765 round_trippers.go:574] Response Status: 200 OK in 132 milliseconds
	I1212 23:21:33.559689  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:33.559701  156765 round_trippers.go:580]     Audit-Id: 6e641b9d-389a-4468-adf4-10a9500419a2
	I1212 23:21:33.559710  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:33.559719  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:33.559726  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:33.559734  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:33.559742  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:33 GMT
	I1212 23:21:33.560914  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"515","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3168 chars]
	I1212 23:21:33.927548  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:33.927581  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:33.927594  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:33.927604  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:33.930350  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:21:33.930373  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:33.930382  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:33.930389  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:33.930397  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:33 GMT
	I1212 23:21:33.930404  156765 round_trippers.go:580]     Audit-Id: b54adc95-d832-45b8-a8a7-c6e463260122
	I1212 23:21:33.930416  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:33.930428  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:33.930641  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"515","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3168 chars]
	I1212 23:21:34.427178  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:34.427223  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:34.427237  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:34.427247  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:34.430443  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:21:34.430470  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:34.430480  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:34 GMT
	I1212 23:21:34.430489  156765 round_trippers.go:580]     Audit-Id: 0efe656f-541c-4eed-ac79-e9168bbf9596
	I1212 23:21:34.430497  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:34.430507  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:34.430517  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:34.430526  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:34.431074  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"515","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3168 chars]
	I1212 23:21:34.927075  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:34.927104  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:34.927114  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:34.927122  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:35.005633  156765 round_trippers.go:574] Response Status: 200 OK in 78 milliseconds
	I1212 23:21:35.005666  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:35.005677  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:35.005686  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:35.005694  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:35.005703  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:35.005711  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:35 GMT
	I1212 23:21:35.005727  156765 round_trippers.go:580]     Audit-Id: b1b18195-5ef7-4a85-b12e-dea658df2501
	I1212 23:21:35.005858  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"515","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3168 chars]
	I1212 23:21:35.006217  156765 node_ready.go:58] node "multinode-510563-m02" has status "Ready":"False"
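	Each of the repeated GETs on /api/v1/nodes/multinode-510563-m02 is one poll of the node object; node_ready.go keeps polling roughly every 500ms until the NodeReady condition turns True or the 6m0s budget from start.go:223 runs out. A condensed client-go sketch of that check (the kubeconfig path is a placeholder; this is not minikube's actual wait loop):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition on n is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same budget as start.go:223
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-510563-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls at roughly this interval
	}
	fmt.Println("timed out waiting for node to be Ready")
}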
	I1212 23:21:35.427458  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:35.427486  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:35.427494  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:35.427501  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:35.432761  156765 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:21:35.432788  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:35.432800  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:35.432822  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:35.432830  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:35.432837  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:35.432843  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:35 GMT
	I1212 23:21:35.432855  156765 round_trippers.go:580]     Audit-Id: 8d725dfa-d944-44c9-acb5-8606cef85224
	I1212 23:21:35.433586  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"515","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3168 chars]
	I1212 23:21:35.927268  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:35.927295  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:35.927303  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:35.927309  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:35.930369  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:21:35.930396  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:35.930407  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:35.930415  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:35.930423  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:35.930431  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:35.930439  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:35 GMT
	I1212 23:21:35.930457  156765 round_trippers.go:580]     Audit-Id: 79acc855-d197-45a4-823e-d06ec6eb024e
	I1212 23:21:35.930646  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"515","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3168 chars]
	I1212 23:21:36.427278  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:36.427306  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:36.427314  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:36.427325  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:36.432341  156765 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:21:36.432367  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:36.432378  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:36.432387  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:36 GMT
	I1212 23:21:36.432392  156765 round_trippers.go:580]     Audit-Id: 18405f44-6c57-4b88-a5ac-612ee3a6a7df
	I1212 23:21:36.432397  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:36.432406  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:36.432410  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:36.433069  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"515","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3168 chars]
	I1212 23:21:36.927785  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:36.927815  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:36.927823  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:36.927829  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:36.930744  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:21:36.930771  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:36.930779  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:36.930787  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:36.930795  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:36.930803  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:36.930811  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:36 GMT
	I1212 23:21:36.930818  156765 round_trippers.go:580]     Audit-Id: d59a0ba7-3371-45ce-882a-ede1b6e726ee
	I1212 23:21:36.931004  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"515","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3168 chars]
	I1212 23:21:37.427766  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:37.427803  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:37.427812  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:37.427818  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:37.431012  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:21:37.431033  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:37.431040  156765 round_trippers.go:580]     Audit-Id: 2c6f800e-d33f-47ce-82ef-ab1b5a09655d
	I1212 23:21:37.431045  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:37.431050  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:37.431055  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:37.431060  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:37.431067  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:37 GMT
	I1212 23:21:37.431460  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"515","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3168 chars]
	I1212 23:21:37.431735  156765 node_ready.go:58] node "multinode-510563-m02" has status "Ready":"False"
	I1212 23:21:37.927148  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:37.927173  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:37.927186  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:37.927196  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:37.930719  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:21:37.930739  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:37.930746  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:37.930751  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:37.930757  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:37.930762  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:37 GMT
	I1212 23:21:37.930767  156765 round_trippers.go:580]     Audit-Id: c183c670-1ee7-4f88-bdfe-90302c7ad563
	I1212 23:21:37.930772  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:37.931458  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"515","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3168 chars]
	I1212 23:21:38.427173  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:38.427213  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:38.427225  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:38.427234  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:38.429834  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:21:38.429855  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:38.429864  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:38.429872  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:38.429880  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:38.429889  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:38.429895  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:38 GMT
	I1212 23:21:38.429900  156765 round_trippers.go:580]     Audit-Id: a46db57d-50f5-4580-9e19-97a689d51e62
	I1212 23:21:38.430214  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"515","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3168 chars]
	I1212 23:21:38.927122  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:38.927153  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:38.927163  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:38.927170  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:38.929941  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:21:38.929967  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:38.929978  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:38.929988  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:38 GMT
	I1212 23:21:38.929997  156765 round_trippers.go:580]     Audit-Id: 7666ea10-d70e-479c-a327-177f98e622ed
	I1212 23:21:38.930006  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:38.930014  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:38.930024  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:38.930168  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"515","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3168 chars]
	I1212 23:21:39.427207  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:39.427235  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:39.427248  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:39.427268  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:39.431509  156765 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:21:39.431536  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:39.431546  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:39 GMT
	I1212 23:21:39.431555  156765 round_trippers.go:580]     Audit-Id: 54782d4e-d1a7-4039-8485-42ed4e429840
	I1212 23:21:39.431563  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:39.431572  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:39.431579  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:39.431587  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:39.432042  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"515","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3168 chars]
	I1212 23:21:39.432339  156765 node_ready.go:58] node "multinode-510563-m02" has status "Ready":"False"
	I1212 23:21:39.927610  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:39.927646  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:39.927659  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:39.927670  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:39.932035  156765 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:21:39.932067  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:39.932078  156765 round_trippers.go:580]     Audit-Id: 88fae218-35c7-4499-bc60-a87adefc8699
	I1212 23:21:39.932087  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:39.932096  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:39.932105  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:39.932114  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:39.932123  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:39 GMT
	I1212 23:21:39.933179  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"515","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3168 chars]
	I1212 23:21:40.427878  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:40.427906  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:40.427914  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:40.427920  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:40.430934  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:21:40.430962  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:40.430972  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:40.430981  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:40.430988  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:40.430995  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:40 GMT
	I1212 23:21:40.431002  156765 round_trippers.go:580]     Audit-Id: 4ced1561-61ce-4a16-b11b-0d6d3add9d9a
	I1212 23:21:40.431009  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:40.431192  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"515","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3168 chars]
	I1212 23:21:40.926854  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:40.926881  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:40.926889  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:40.926895  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:40.929783  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:21:40.929811  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:40.929819  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:40.929824  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:40.929830  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:40.929839  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:40 GMT
	I1212 23:21:40.929844  156765 round_trippers.go:580]     Audit-Id: 4608987a-fdec-479b-b90d-ef72072e9c40
	I1212 23:21:40.929850  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:40.930178  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"515","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3168 chars]
	I1212 23:21:41.427891  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:41.427915  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:41.427923  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:41.427929  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:41.430283  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:21:41.430308  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:41.430318  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:41 GMT
	I1212 23:21:41.430326  156765 round_trippers.go:580]     Audit-Id: 66b43728-e116-4fe3-8028-0b6bd1fc8a93
	I1212 23:21:41.430334  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:41.430342  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:41.430349  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:41.430357  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:41.430896  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"515","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3168 chars]
	I1212 23:21:41.927658  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:41.927691  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:41.927704  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:41.927713  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:41.931381  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:21:41.931413  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:41.931420  156765 round_trippers.go:580]     Audit-Id: 792c48b6-f1c6-4b1e-8ea8-3b3268536bc0
	I1212 23:21:41.931425  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:41.931430  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:41.931435  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:41.931440  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:41.931446  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:41 GMT
	I1212 23:21:41.931649  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"537","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3437 chars]
	I1212 23:21:41.931917  156765 node_ready.go:58] node "multinode-510563-m02" has status "Ready":"False"
	I1212 23:21:42.427316  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:42.427341  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:42.427349  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:42.427355  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:42.429979  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:21:42.430000  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:42.430008  156765 round_trippers.go:580]     Audit-Id: 93224aa5-67d7-46fb-a6c9-bb4905ae1b0a
	I1212 23:21:42.430013  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:42.430019  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:42.430023  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:42.430028  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:42.430033  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:42 GMT
	I1212 23:21:42.430320  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"540","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3254 chars]
	I1212 23:21:42.430688  156765 node_ready.go:49] node "multinode-510563-m02" has status "Ready":"True"
	I1212 23:21:42.430710  156765 node_ready.go:38] duration metric: took 9.510668148s waiting for node "multinode-510563-m02" to be "Ready" ...
	I1212 23:21:42.430721  156765 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:21:42.430787  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I1212 23:21:42.430798  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:42.430808  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:42.430822  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:42.435411  156765 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:21:42.435428  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:42.435435  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:42.435440  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:42.435445  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:42 GMT
	I1212 23:21:42.435450  156765 round_trippers.go:580]     Audit-Id: ddc3e4de-8c33-4769-894f-021c5fe8db29
	I1212 23:21:42.435455  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:42.435460  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:42.437246  156765 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"540"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"456","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67332 chars]
	I1212 23:21:42.439459  156765 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zcxks" in "kube-system" namespace to be "Ready" ...
	I1212 23:21:42.439580  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcxks
	I1212 23:21:42.439592  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:42.439600  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:42.439606  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:42.442042  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:21:42.442063  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:42.442078  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:42.442086  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:42.442095  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:42 GMT
	I1212 23:21:42.442103  156765 round_trippers.go:580]     Audit-Id: e5290eec-5b62-4e36-9b64-4ad89ea50d1d
	I1212 23:21:42.442112  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:42.442121  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:42.442319  156765 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"456","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1212 23:21:42.442693  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:21:42.442705  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:42.442712  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:42.442717  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:42.444654  156765 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:21:42.444672  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:42.444682  156765 round_trippers.go:580]     Audit-Id: 94b957da-0a26-4ed1-a86a-55d4bb404291
	I1212 23:21:42.444691  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:42.444698  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:42.444704  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:42.444714  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:42.444720  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:42 GMT
	I1212 23:21:42.444960  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"432","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 23:21:42.445215  156765 pod_ready.go:92] pod "coredns-5dd5756b68-zcxks" in "kube-system" namespace has status "Ready":"True"
	I1212 23:21:42.445229  156765 pod_ready.go:81] duration metric: took 5.742504ms waiting for pod "coredns-5dd5756b68-zcxks" in "kube-system" namespace to be "Ready" ...
	I1212 23:21:42.445237  156765 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:21:42.445283  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-510563
	I1212 23:21:42.445290  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:42.445296  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:42.445302  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:42.447050  156765 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:21:42.447064  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:42.447073  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:42.447080  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:42 GMT
	I1212 23:21:42.447088  156765 round_trippers.go:580]     Audit-Id: 2ffbc22a-b8ee-412e-88fd-a6d67254320c
	I1212 23:21:42.447096  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:42.447104  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:42.447113  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:42.447223  156765 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-510563","namespace":"kube-system","uid":"2748a67b-24f2-4b90-bf95-eb56755a397a","resourceVersion":"442","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.mirror":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.seen":"2023-12-12T23:20:36.354957049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1212 23:21:42.447634  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:21:42.447649  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:42.447656  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:42.447662  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:42.449884  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:21:42.449900  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:42.449907  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:42.449913  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:42.449918  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:42 GMT
	I1212 23:21:42.449923  156765 round_trippers.go:580]     Audit-Id: ce447ec4-bf4d-4651-8b8c-11f364d3ffef
	I1212 23:21:42.449928  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:42.449939  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:42.450182  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"432","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 23:21:42.450468  156765 pod_ready.go:92] pod "etcd-multinode-510563" in "kube-system" namespace has status "Ready":"True"
	I1212 23:21:42.450485  156765 pod_ready.go:81] duration metric: took 5.242923ms waiting for pod "etcd-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:21:42.450499  156765 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:21:42.450539  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-510563
	I1212 23:21:42.450547  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:42.450553  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:42.450559  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:42.452772  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:21:42.452793  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:42.452801  156765 round_trippers.go:580]     Audit-Id: 698668f7-385b-43c2-850d-00a7c1a2673d
	I1212 23:21:42.452809  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:42.452818  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:42.452826  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:42.452834  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:42.452841  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:42 GMT
	I1212 23:21:42.453001  156765 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-510563","namespace":"kube-system","uid":"e8a8ed00-d13d-44f0-b7d6-b42bf1342d95","resourceVersion":"439","creationTimestamp":"2023-12-12T23:20:34Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.38:8443","kubernetes.io/config.hash":"4b970951c1b4ca2bc525afa7c2eb2fef","kubernetes.io/config.mirror":"4b970951c1b4ca2bc525afa7c2eb2fef","kubernetes.io/config.seen":"2023-12-12T23:20:27.932579600Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1212 23:21:42.453386  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:21:42.453399  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:42.453407  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:42.453413  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:42.455790  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:21:42.455808  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:42.455818  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:42.455826  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:42.455833  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:42.455851  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:42.455863  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:42 GMT
	I1212 23:21:42.455871  156765 round_trippers.go:580]     Audit-Id: 7eaa5d6e-dd01-49f4-b7be-aac49c600b29
	I1212 23:21:42.456503  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"432","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 23:21:42.456877  156765 pod_ready.go:92] pod "kube-apiserver-multinode-510563" in "kube-system" namespace has status "Ready":"True"
	I1212 23:21:42.456896  156765 pod_ready.go:81] duration metric: took 6.390425ms waiting for pod "kube-apiserver-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:21:42.456908  156765 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:21:42.456967  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-510563
	I1212 23:21:42.456978  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:42.456987  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:42.456997  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:42.459359  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:21:42.459372  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:42.459378  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:42.459384  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:42.459389  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:42 GMT
	I1212 23:21:42.459393  156765 round_trippers.go:580]     Audit-Id: b9a8f3ea-6fa2-4c6b-ae4f-5328a6680123
	I1212 23:21:42.459401  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:42.459409  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:42.460082  156765 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-510563","namespace":"kube-system","uid":"efdc7f68-25d6-4f6a-ab8f-1dec43407375","resourceVersion":"440","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"50f588add554ab298cca0792048dbecc","kubernetes.io/config.mirror":"50f588add554ab298cca0792048dbecc","kubernetes.io/config.seen":"2023-12-12T23:20:36.354954910Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1212 23:21:42.460520  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:21:42.460535  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:42.460545  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:42.460555  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:42.462479  156765 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:21:42.462498  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:42.462507  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:42 GMT
	I1212 23:21:42.462515  156765 round_trippers.go:580]     Audit-Id: 829262ed-526a-4862-a448-e8143408e839
	I1212 23:21:42.462523  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:42.462531  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:42.462539  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:42.462547  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:42.462849  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"432","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 23:21:42.463215  156765 pod_ready.go:92] pod "kube-controller-manager-multinode-510563" in "kube-system" namespace has status "Ready":"True"
	I1212 23:21:42.463236  156765 pod_ready.go:81] duration metric: took 6.316862ms waiting for pod "kube-controller-manager-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:21:42.463247  156765 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hspw8" in "kube-system" namespace to be "Ready" ...
	I1212 23:21:42.627699  156765 request.go:629] Waited for 164.371392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hspw8
	I1212 23:21:42.627791  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hspw8
	I1212 23:21:42.627799  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:42.627812  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:42.627823  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:42.630885  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:21:42.630910  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:42.630918  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:42.630925  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:42 GMT
	I1212 23:21:42.630932  156765 round_trippers.go:580]     Audit-Id: 6417a32e-2591-4594-b869-b09d349261b4
	I1212 23:21:42.630940  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:42.630950  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:42.630958  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:42.631232  156765 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hspw8","generateName":"kube-proxy-","namespace":"kube-system","uid":"a2255be6-8705-40cd-8f35-a3e82906190c","resourceVersion":"421","creationTimestamp":"2023-12-12T23:20:49Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d1b4c800-a24b-499d-bbe3-4a554353bc2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1b4c800-a24b-499d-bbe3-4a554353bc2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1212 23:21:42.828171  156765 request.go:629] Waited for 196.427408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:21:42.828229  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:21:42.828234  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:42.828242  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:42.828247  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:42.831114  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:21:42.831139  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:42.831150  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:42 GMT
	I1212 23:21:42.831160  156765 round_trippers.go:580]     Audit-Id: e33b140c-8fe1-4304-8050-95efed69e406
	I1212 23:21:42.831168  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:42.831176  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:42.831182  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:42.831187  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:42.831348  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"432","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 23:21:42.831706  156765 pod_ready.go:92] pod "kube-proxy-hspw8" in "kube-system" namespace has status "Ready":"True"
	I1212 23:21:42.831727  156765 pod_ready.go:81] duration metric: took 368.473133ms waiting for pod "kube-proxy-hspw8" in "kube-system" namespace to be "Ready" ...
	I1212 23:21:42.831738  156765 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-msx8s" in "kube-system" namespace to be "Ready" ...
	I1212 23:21:43.028204  156765 request.go:629] Waited for 196.396051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msx8s
	I1212 23:21:43.028269  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msx8s
	I1212 23:21:43.028273  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:43.028281  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:43.028287  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:43.031180  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:21:43.031212  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:43.031222  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:43.031231  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:43.031240  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:43 GMT
	I1212 23:21:43.031248  156765 round_trippers.go:580]     Audit-Id: 70f5b148-ca33-492e-bbd5-9b349cea7830
	I1212 23:21:43.031255  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:43.031260  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:43.031355  156765 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-msx8s","generateName":"kube-proxy-","namespace":"kube-system","uid":"f41b9a6d-8132-45a6-9847-5a762664b008","resourceVersion":"525","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d1b4c800-a24b-499d-bbe3-4a554353bc2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1b4c800-a24b-499d-bbe3-4a554353bc2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1212 23:21:43.228170  156765 request.go:629] Waited for 196.403847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:43.228320  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:21:43.228341  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:43.228353  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:43.228363  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:43.233072  156765 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:21:43.233098  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:43.233107  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:43 GMT
	I1212 23:21:43.233114  156765 round_trippers.go:580]     Audit-Id: 68f338b5-654e-4d3a-8b08-4392e0b35250
	I1212 23:21:43.233127  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:43.233137  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:43.233145  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:43.233153  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:43.233331  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"541","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_21_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3134 chars]
	I1212 23:21:43.233578  156765 pod_ready.go:92] pod "kube-proxy-msx8s" in "kube-system" namespace has status "Ready":"True"
	I1212 23:21:43.233592  156765 pod_ready.go:81] duration metric: took 401.849281ms waiting for pod "kube-proxy-msx8s" in "kube-system" namespace to be "Ready" ...
	I1212 23:21:43.233601  156765 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:21:43.428033  156765 request.go:629] Waited for 194.352584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-510563
	I1212 23:21:43.428097  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-510563
	I1212 23:21:43.428113  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:43.428121  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:43.428127  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:43.431238  156765 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:21:43.431258  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:43.431264  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:43.431270  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:43 GMT
	I1212 23:21:43.431275  156765 round_trippers.go:580]     Audit-Id: 3c51335e-2851-4b88-83c7-50a762b7b018
	I1212 23:21:43.431280  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:43.431284  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:43.431290  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:43.431656  156765 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-510563","namespace":"kube-system","uid":"044da73c-9466-4a43-b283-5f4b9cc04df9","resourceVersion":"441","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fdb335c77d5fb1581ea23fa0adf419e9","kubernetes.io/config.mirror":"fdb335c77d5fb1581ea23fa0adf419e9","kubernetes.io/config.seen":"2023-12-12T23:20:36.354955844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1212 23:21:43.628363  156765 request.go:629] Waited for 196.358874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:21:43.628467  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:21:43.628474  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:43.628486  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:43.628496  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:43.631265  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:21:43.631284  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:43.631291  156765 round_trippers.go:580]     Audit-Id: c64b313a-4229-4148-809b-0884f4f757b9
	I1212 23:21:43.631296  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:43.631301  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:43.631306  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:43.631311  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:43.631316  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:43 GMT
	I1212 23:21:43.631897  156765 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"432","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1212 23:21:43.632192  156765 pod_ready.go:92] pod "kube-scheduler-multinode-510563" in "kube-system" namespace has status "Ready":"True"
	I1212 23:21:43.632206  156765 pod_ready.go:81] duration metric: took 398.598823ms waiting for pod "kube-scheduler-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:21:43.632215  156765 pod_ready.go:38] duration metric: took 1.201483842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:21:43.632226  156765 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:21:43.632268  156765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:21:43.646549  156765 system_svc.go:56] duration metric: took 14.314095ms WaitForService to wait for kubelet.
	I1212 23:21:43.646573  156765 kubeadm.go:581] duration metric: took 10.747881071s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:21:43.646591  156765 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:21:43.828008  156765 request.go:629] Waited for 181.328297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes
	I1212 23:21:43.828104  156765 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes
	I1212 23:21:43.828112  156765 round_trippers.go:469] Request Headers:
	I1212 23:21:43.828124  156765 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:21:43.828135  156765 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:21:43.830887  156765 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:21:43.830917  156765 round_trippers.go:577] Response Headers:
	I1212 23:21:43.830927  156765 round_trippers.go:580]     Audit-Id: dec4c64f-21a8-4ae2-b776-f5c4128b4037
	I1212 23:21:43.830933  156765 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:21:43.830939  156765 round_trippers.go:580]     Content-Type: application/json
	I1212 23:21:43.830947  156765 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:21:43.830955  156765 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:21:43.830964  156765 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:21:43 GMT
	I1212 23:21:43.831413  156765 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"542"},"items":[{"metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"432","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 10077 chars]
	I1212 23:21:43.831835  156765 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:21:43.831873  156765 node_conditions.go:123] node cpu capacity is 2
	I1212 23:21:43.831891  156765 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:21:43.831896  156765 node_conditions.go:123] node cpu capacity is 2
	I1212 23:21:43.831900  156765 node_conditions.go:105] duration metric: took 185.303945ms to run NodePressure ...
	I1212 23:21:43.831910  156765 start.go:228] waiting for startup goroutines ...
	I1212 23:21:43.831935  156765 start.go:242] writing updated cluster config ...
	I1212 23:21:43.832221  156765 ssh_runner.go:195] Run: rm -f paused
	I1212 23:21:43.878975  156765 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 23:21:43.882052  156765 out.go:177] * Done! kubectl is now configured to use "multinode-510563" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-12 23:20:02 UTC, ends at Tue 2023-12-12 23:21:54 UTC. --
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.076235464Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702423314076222143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0dccbab4-4402-46c7-9ce0-d1fdd47a66aa name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.076625525Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2a3145c5-ce5f-4e1c-b5ae-1ab8767f2f5d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.076707224Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2a3145c5-ce5f-4e1c-b5ae-1ab8767f2f5d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.077108607Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a33f7c50dc936ad7f9d5a315a8cf353d6dcf89a81b0904d255ab1b156bb89e53,PodSandboxId:340c2e795ec275cc3f3964badf46fd8ec46aa4300742246f5031492633b3133d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702423309753542196,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4vnmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00d42ae1-e3c5-461d-9019-b5609191598e,},Annotations:map[string]string{io.kubernetes.container.hash: a8d1b557,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d6c97b8f177574562a4e7291ba3d5699d442c3e799fcf6bc5d5c6586711660,PodSandboxId:cfa8305a415fa16a69bfacebbb5c25f22d07eaadccb7a5a7aef317bb41815e8b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423257020559089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcxks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 503de693-19d6-45c5-97c6-3b8e5657bfee,},Annotations:map[string]string{io.kubernetes.container.hash: ced1e245,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a922b6d56264bd8179463758250e45ad3ddd799b7643a8d30ef779da4d00d0e,PodSandboxId:fb0a72db2f114323e341eeac9c006b7405dfde9dde2fdfc0968347400362af52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423256739367819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: cb4f186a-9bb9-488f-8a74-6e01f352fc05,},Annotations:map[string]string{io.kubernetes.container.hash: 181c10d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc21d2809082dd6b2a02c76aada4170fe160741303091f4ec745b65babf5c4d2,PodSandboxId:b8bf53774602d410f1757733319f1d4e091344dbb705c951b324c1c467b48e2c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702423253883881228,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v4js8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: cfe24f85-472c-4ef2-9a48-9e3647cc8feb,},Annotations:map[string]string{io.kubernetes.container.hash: a0875e0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f17633bb992c3c03ff3efdb724e554956c6cb9b125ef74c259e8993124e52534,PodSandboxId:252ef334b5020c2720eeaaa75f3294e8ed1ba30163d776b3b6c2f4c7a717f76e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423251547826677,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hspw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2255be6-8705-40cd-8f35-a3e829
06190c,},Annotations:map[string]string{io.kubernetes.container.hash: d38d9edd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590b1be7a8dfe1d796fc8e243464af8f50f73b772a8b314522151b8bf926e0e8,PodSandboxId:7320729837d9c95902d4b334a23a6a140349ae743f71c3f9424ee9cdbbc70e64,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423229484703485,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99da52f53b721a1a612acc1bca02d501,},Annotations:map[string]string{io.kubernetes
.container.hash: 52ffbf68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1009384227c15b35e74fed818350fd0bb27595ebbd39a5d68ba2b7cf9b032705,PodSandboxId:19b8fd4257911829de069ddbe9b5f897e2bae819f46c63d745760297b4521b89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423229224826101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdb335c77d5fb1581ea23fa0adf419e9,},Annotations:map[string]string{io.kubernetes.container.h
ash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24febbad2ba0b2bb214a84438466a95c93679bc8df4106b4f4d4ab5653ef760,PodSandboxId:939c3822594ad5ed41abd6efa321ef3461d95680d7b4ed7aa2c949b0d3c238b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423229133223996,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f588add554ab298cca0792048dbecc,},Annotations:map[string]string{i
o.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fe05b10bfcd6f6175b47556313838815da9a96a03c510f0440e507fb82c5f96,PodSandboxId:692034b0454fae3067d20c67be9c0c3fbe461c4884e7f54cb0221c0c7356bbdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423229168784290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b970951c1b4ca2bc525afa7c2eb2fef,},Annotations:map[string]string{io.kubernetes
.container.hash: 46ec173a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2a3145c5-ce5f-4e1c-b5ae-1ab8767f2f5d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.112843525Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=761de983-88fb-4128-b1b8-5224a720a780 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.112927151Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=761de983-88fb-4128-b1b8-5224a720a780 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.114446217Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e7620e54-89cf-4bc8-8b00-ef26642ebb74 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.114830553Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702423314114816927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e7620e54-89cf-4bc8-8b00-ef26642ebb74 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.115271114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=39d9db71-4beb-4151-8775-d001eda89feb name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.115342362Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=39d9db71-4beb-4151-8775-d001eda89feb name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.115516191Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a33f7c50dc936ad7f9d5a315a8cf353d6dcf89a81b0904d255ab1b156bb89e53,PodSandboxId:340c2e795ec275cc3f3964badf46fd8ec46aa4300742246f5031492633b3133d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702423309753542196,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4vnmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00d42ae1-e3c5-461d-9019-b5609191598e,},Annotations:map[string]string{io.kubernetes.container.hash: a8d1b557,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d6c97b8f177574562a4e7291ba3d5699d442c3e799fcf6bc5d5c6586711660,PodSandboxId:cfa8305a415fa16a69bfacebbb5c25f22d07eaadccb7a5a7aef317bb41815e8b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423257020559089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcxks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 503de693-19d6-45c5-97c6-3b8e5657bfee,},Annotations:map[string]string{io.kubernetes.container.hash: ced1e245,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a922b6d56264bd8179463758250e45ad3ddd799b7643a8d30ef779da4d00d0e,PodSandboxId:fb0a72db2f114323e341eeac9c006b7405dfde9dde2fdfc0968347400362af52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423256739367819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: cb4f186a-9bb9-488f-8a74-6e01f352fc05,},Annotations:map[string]string{io.kubernetes.container.hash: 181c10d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc21d2809082dd6b2a02c76aada4170fe160741303091f4ec745b65babf5c4d2,PodSandboxId:b8bf53774602d410f1757733319f1d4e091344dbb705c951b324c1c467b48e2c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702423253883881228,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v4js8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: cfe24f85-472c-4ef2-9a48-9e3647cc8feb,},Annotations:map[string]string{io.kubernetes.container.hash: a0875e0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f17633bb992c3c03ff3efdb724e554956c6cb9b125ef74c259e8993124e52534,PodSandboxId:252ef334b5020c2720eeaaa75f3294e8ed1ba30163d776b3b6c2f4c7a717f76e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423251547826677,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hspw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2255be6-8705-40cd-8f35-a3e829
06190c,},Annotations:map[string]string{io.kubernetes.container.hash: d38d9edd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590b1be7a8dfe1d796fc8e243464af8f50f73b772a8b314522151b8bf926e0e8,PodSandboxId:7320729837d9c95902d4b334a23a6a140349ae743f71c3f9424ee9cdbbc70e64,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423229484703485,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99da52f53b721a1a612acc1bca02d501,},Annotations:map[string]string{io.kubernetes
.container.hash: 52ffbf68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1009384227c15b35e74fed818350fd0bb27595ebbd39a5d68ba2b7cf9b032705,PodSandboxId:19b8fd4257911829de069ddbe9b5f897e2bae819f46c63d745760297b4521b89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423229224826101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdb335c77d5fb1581ea23fa0adf419e9,},Annotations:map[string]string{io.kubernetes.container.h
ash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24febbad2ba0b2bb214a84438466a95c93679bc8df4106b4f4d4ab5653ef760,PodSandboxId:939c3822594ad5ed41abd6efa321ef3461d95680d7b4ed7aa2c949b0d3c238b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423229133223996,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f588add554ab298cca0792048dbecc,},Annotations:map[string]string{i
o.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fe05b10bfcd6f6175b47556313838815da9a96a03c510f0440e507fb82c5f96,PodSandboxId:692034b0454fae3067d20c67be9c0c3fbe461c4884e7f54cb0221c0c7356bbdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423229168784290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b970951c1b4ca2bc525afa7c2eb2fef,},Annotations:map[string]string{io.kubernetes
.container.hash: 46ec173a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=39d9db71-4beb-4151-8775-d001eda89feb name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.154950726Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6d106591-8b6a-48f7-b18d-bccaede43ccd name=/runtime.v1.RuntimeService/Version
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.155115189Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6d106591-8b6a-48f7-b18d-bccaede43ccd name=/runtime.v1.RuntimeService/Version
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.156531343Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a5b5e1ad-4b15-4a5e-86b5-a2d3a34fb4e5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.156993504Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702423314156964790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a5b5e1ad-4b15-4a5e-86b5-a2d3a34fb4e5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.157838483Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2e8f1259-0a6f-4a80-b21f-660dd8e3f0f6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.157908429Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2e8f1259-0a6f-4a80-b21f-660dd8e3f0f6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.158166930Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a33f7c50dc936ad7f9d5a315a8cf353d6dcf89a81b0904d255ab1b156bb89e53,PodSandboxId:340c2e795ec275cc3f3964badf46fd8ec46aa4300742246f5031492633b3133d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702423309753542196,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4vnmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00d42ae1-e3c5-461d-9019-b5609191598e,},Annotations:map[string]string{io.kubernetes.container.hash: a8d1b557,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d6c97b8f177574562a4e7291ba3d5699d442c3e799fcf6bc5d5c6586711660,PodSandboxId:cfa8305a415fa16a69bfacebbb5c25f22d07eaadccb7a5a7aef317bb41815e8b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423257020559089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcxks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 503de693-19d6-45c5-97c6-3b8e5657bfee,},Annotations:map[string]string{io.kubernetes.container.hash: ced1e245,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a922b6d56264bd8179463758250e45ad3ddd799b7643a8d30ef779da4d00d0e,PodSandboxId:fb0a72db2f114323e341eeac9c006b7405dfde9dde2fdfc0968347400362af52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423256739367819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: cb4f186a-9bb9-488f-8a74-6e01f352fc05,},Annotations:map[string]string{io.kubernetes.container.hash: 181c10d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc21d2809082dd6b2a02c76aada4170fe160741303091f4ec745b65babf5c4d2,PodSandboxId:b8bf53774602d410f1757733319f1d4e091344dbb705c951b324c1c467b48e2c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702423253883881228,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v4js8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: cfe24f85-472c-4ef2-9a48-9e3647cc8feb,},Annotations:map[string]string{io.kubernetes.container.hash: a0875e0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f17633bb992c3c03ff3efdb724e554956c6cb9b125ef74c259e8993124e52534,PodSandboxId:252ef334b5020c2720eeaaa75f3294e8ed1ba30163d776b3b6c2f4c7a717f76e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423251547826677,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hspw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2255be6-8705-40cd-8f35-a3e829
06190c,},Annotations:map[string]string{io.kubernetes.container.hash: d38d9edd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590b1be7a8dfe1d796fc8e243464af8f50f73b772a8b314522151b8bf926e0e8,PodSandboxId:7320729837d9c95902d4b334a23a6a140349ae743f71c3f9424ee9cdbbc70e64,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423229484703485,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99da52f53b721a1a612acc1bca02d501,},Annotations:map[string]string{io.kubernetes
.container.hash: 52ffbf68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1009384227c15b35e74fed818350fd0bb27595ebbd39a5d68ba2b7cf9b032705,PodSandboxId:19b8fd4257911829de069ddbe9b5f897e2bae819f46c63d745760297b4521b89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423229224826101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdb335c77d5fb1581ea23fa0adf419e9,},Annotations:map[string]string{io.kubernetes.container.h
ash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24febbad2ba0b2bb214a84438466a95c93679bc8df4106b4f4d4ab5653ef760,PodSandboxId:939c3822594ad5ed41abd6efa321ef3461d95680d7b4ed7aa2c949b0d3c238b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423229133223996,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f588add554ab298cca0792048dbecc,},Annotations:map[string]string{i
o.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fe05b10bfcd6f6175b47556313838815da9a96a03c510f0440e507fb82c5f96,PodSandboxId:692034b0454fae3067d20c67be9c0c3fbe461c4884e7f54cb0221c0c7356bbdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423229168784290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b970951c1b4ca2bc525afa7c2eb2fef,},Annotations:map[string]string{io.kubernetes
.container.hash: 46ec173a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2e8f1259-0a6f-4a80-b21f-660dd8e3f0f6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.199663476Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6cb27043-fe83-421a-b9b4-feba7010c964 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.199746689Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6cb27043-fe83-421a-b9b4-feba7010c964 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.201293500Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=563c1a27-4337-4156-863d-50023858f50b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.201672915Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702423314201659199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=563c1a27-4337-4156-863d-50023858f50b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.202258104Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2affafc6-d835-4c51-a414-5d183759a626 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.202323718Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2affafc6-d835-4c51-a414-5d183759a626 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:21:54 multinode-510563 crio[718]: time="2023-12-12 23:21:54.202513109Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a33f7c50dc936ad7f9d5a315a8cf353d6dcf89a81b0904d255ab1b156bb89e53,PodSandboxId:340c2e795ec275cc3f3964badf46fd8ec46aa4300742246f5031492633b3133d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702423309753542196,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4vnmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00d42ae1-e3c5-461d-9019-b5609191598e,},Annotations:map[string]string{io.kubernetes.container.hash: a8d1b557,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d6c97b8f177574562a4e7291ba3d5699d442c3e799fcf6bc5d5c6586711660,PodSandboxId:cfa8305a415fa16a69bfacebbb5c25f22d07eaadccb7a5a7aef317bb41815e8b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423257020559089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcxks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 503de693-19d6-45c5-97c6-3b8e5657bfee,},Annotations:map[string]string{io.kubernetes.container.hash: ced1e245,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a922b6d56264bd8179463758250e45ad3ddd799b7643a8d30ef779da4d00d0e,PodSandboxId:fb0a72db2f114323e341eeac9c006b7405dfde9dde2fdfc0968347400362af52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423256739367819,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: cb4f186a-9bb9-488f-8a74-6e01f352fc05,},Annotations:map[string]string{io.kubernetes.container.hash: 181c10d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc21d2809082dd6b2a02c76aada4170fe160741303091f4ec745b65babf5c4d2,PodSandboxId:b8bf53774602d410f1757733319f1d4e091344dbb705c951b324c1c467b48e2c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702423253883881228,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v4js8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: cfe24f85-472c-4ef2-9a48-9e3647cc8feb,},Annotations:map[string]string{io.kubernetes.container.hash: a0875e0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f17633bb992c3c03ff3efdb724e554956c6cb9b125ef74c259e8993124e52534,PodSandboxId:252ef334b5020c2720eeaaa75f3294e8ed1ba30163d776b3b6c2f4c7a717f76e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423251547826677,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hspw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2255be6-8705-40cd-8f35-a3e829
06190c,},Annotations:map[string]string{io.kubernetes.container.hash: d38d9edd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590b1be7a8dfe1d796fc8e243464af8f50f73b772a8b314522151b8bf926e0e8,PodSandboxId:7320729837d9c95902d4b334a23a6a140349ae743f71c3f9424ee9cdbbc70e64,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423229484703485,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99da52f53b721a1a612acc1bca02d501,},Annotations:map[string]string{io.kubernetes
.container.hash: 52ffbf68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1009384227c15b35e74fed818350fd0bb27595ebbd39a5d68ba2b7cf9b032705,PodSandboxId:19b8fd4257911829de069ddbe9b5f897e2bae819f46c63d745760297b4521b89,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423229224826101,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdb335c77d5fb1581ea23fa0adf419e9,},Annotations:map[string]string{io.kubernetes.container.h
ash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24febbad2ba0b2bb214a84438466a95c93679bc8df4106b4f4d4ab5653ef760,PodSandboxId:939c3822594ad5ed41abd6efa321ef3461d95680d7b4ed7aa2c949b0d3c238b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423229133223996,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f588add554ab298cca0792048dbecc,},Annotations:map[string]string{i
o.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fe05b10bfcd6f6175b47556313838815da9a96a03c510f0440e507fb82c5f96,PodSandboxId:692034b0454fae3067d20c67be9c0c3fbe461c4884e7f54cb0221c0c7356bbdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423229168784290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b970951c1b4ca2bc525afa7c2eb2fef,},Annotations:map[string]string{io.kubernetes
.container.hash: 46ec173a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2affafc6-d835-4c51-a414-5d183759a626 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a33f7c50dc936       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   340c2e795ec27       busybox-5bc68d56bd-4vnmj
	e3d6c97b8f177       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      57 seconds ago       Running             coredns                   0                   cfa8305a415fa       coredns-5dd5756b68-zcxks
	1a922b6d56264       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      57 seconds ago       Running             storage-provisioner       0                   fb0a72db2f114       storage-provisioner
	fc21d2809082d       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      About a minute ago   Running             kindnet-cni               0                   b8bf53774602d       kindnet-v4js8
	f17633bb992c3       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                0                   252ef334b5020       kube-proxy-hspw8
	590b1be7a8dfe       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   7320729837d9c       etcd-multinode-510563
	1009384227c15       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   19b8fd4257911       kube-scheduler-multinode-510563
	0fe05b10bfcd6       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   692034b0454fa       kube-apiserver-multinode-510563
	d24febbad2ba0       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   939c3822594ad       kube-controller-manager-multinode-510563
	
	* 
	* ==> coredns [e3d6c97b8f177574562a4e7291ba3d5699d442c3e799fcf6bc5d5c6586711660] <==
	* [INFO] 10.244.0.3:45208 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118258s
	[INFO] 10.244.1.2:40573 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150358s
	[INFO] 10.244.1.2:47833 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002186334s
	[INFO] 10.244.1.2:37476 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010789s
	[INFO] 10.244.1.2:39436 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079055s
	[INFO] 10.244.1.2:48942 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001475722s
	[INFO] 10.244.1.2:49084 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076201s
	[INFO] 10.244.1.2:48570 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127732s
	[INFO] 10.244.1.2:50979 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141346s
	[INFO] 10.244.0.3:55201 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130892s
	[INFO] 10.244.0.3:35969 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093572s
	[INFO] 10.244.0.3:50188 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088962s
	[INFO] 10.244.0.3:55376 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085205s
	[INFO] 10.244.1.2:54583 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000234373s
	[INFO] 10.244.1.2:51020 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130245s
	[INFO] 10.244.1.2:46785 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000255588s
	[INFO] 10.244.1.2:36501 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117935s
	[INFO] 10.244.0.3:34437 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096545s
	[INFO] 10.244.0.3:55061 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000179752s
	[INFO] 10.244.0.3:39955 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000150442s
	[INFO] 10.244.0.3:39456 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000307006s
	[INFO] 10.244.1.2:53741 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143609s
	[INFO] 10.244.1.2:60060 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130524s
	[INFO] 10.244.1.2:52844 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079464s
	[INFO] 10.244.1.2:35064 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000079483s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-510563
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-510563
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446
	                    minikube.k8s.io/name=multinode-510563
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T23_20_37_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:20:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-510563
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:21:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:20:55 +0000   Tue, 12 Dec 2023 23:20:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:20:55 +0000   Tue, 12 Dec 2023 23:20:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:20:55 +0000   Tue, 12 Dec 2023 23:20:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:20:55 +0000   Tue, 12 Dec 2023 23:20:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    multinode-510563
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ea9003964b849fbada5a3ef7b0b44a7
	  System UUID:                4ea90039-64b8-49fb-ada5-a3ef7b0b44a7
	  Boot ID:                    82ff4e33-1e33-4a4f-9b4e-a73a98e9fd0e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-4vnmj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-zcxks                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     66s
	  kube-system                 etcd-multinode-510563                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         78s
	  kube-system                 kindnet-v4js8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      65s
	  kube-system                 kube-apiserver-multinode-510563             250m (12%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-controller-manager-multinode-510563    200m (10%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-hspw8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-scheduler-multinode-510563             100m (5%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 62s                kube-proxy       
	  Normal  Starting                 87s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  87s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  86s (x8 over 87s)  kubelet          Node multinode-510563 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s (x8 over 87s)  kubelet          Node multinode-510563 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s (x7 over 87s)  kubelet          Node multinode-510563 status is now: NodeHasSufficientPID
	  Normal  Starting                 78s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  78s                kubelet          Node multinode-510563 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s                kubelet          Node multinode-510563 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s                kubelet          Node multinode-510563 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           66s                node-controller  Node multinode-510563 event: Registered Node multinode-510563 in Controller
	  Normal  NodeReady                59s                kubelet          Node multinode-510563 status is now: NodeReady
	
	
	Name:               multinode-510563-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-510563-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446
	                    minikube.k8s.io/name=multinode-510563
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_12T23_21_32_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:21:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-510563-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:21:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:21:41 +0000   Tue, 12 Dec 2023 23:21:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:21:41 +0000   Tue, 12 Dec 2023 23:21:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:21:41 +0000   Tue, 12 Dec 2023 23:21:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:21:41 +0000   Tue, 12 Dec 2023 23:21:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    multinode-510563-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 01ff35da603346c58e26ad58a3d3ca74
	  System UUID:                01ff35da-6033-46c5-8e26-ad58a3d3ca74
	  Boot ID:                    bd39a86f-efc2-4469-ad3a-603d6cbd436e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-6hjc6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 kindnet-5v7sf               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23s
	  kube-system                 kube-proxy-msx8s            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18s                kube-proxy       
	  Normal  NodeHasSufficientMemory  23s (x5 over 24s)  kubelet          Node multinode-510563-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x5 over 24s)  kubelet          Node multinode-510563-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x5 over 24s)  kubelet          Node multinode-510563-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21s                node-controller  Node multinode-510563-m02 event: Registered Node multinode-510563-m02 in Controller
	  Normal  NodeReady                13s                kubelet          Node multinode-510563-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Dec12 23:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067398] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.342728] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Dec12 23:20] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.137974] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.034947] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.136455] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.102252] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.148326] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.102878] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.218984] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[  +9.698013] systemd-fstab-generator[929]: Ignoring "noauto" for root device
	[  +8.759142] systemd-fstab-generator[1258]: Ignoring "noauto" for root device
	[ +21.518023] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [590b1be7a8dfe1d796fc8e243464af8f50f73b772a8b314522151b8bf926e0e8] <==
	* {"level":"info","ts":"2023-12-12T23:20:31.261115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-12T23:20:31.261178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-12T23:20:31.261195Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da received MsgPreVoteResp from 38b26e584d45e0da at term 1"}
	{"level":"info","ts":"2023-12-12T23:20:31.261206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T23:20:31.261212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da received MsgVoteResp from 38b26e584d45e0da at term 2"}
	{"level":"info","ts":"2023-12-12T23:20:31.26122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became leader at term 2"}
	{"level":"info","ts":"2023-12-12T23:20:31.261227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 38b26e584d45e0da elected leader 38b26e584d45e0da at term 2"}
	{"level":"info","ts":"2023-12-12T23:20:31.264085Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:20:31.266561Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"38b26e584d45e0da","local-member-attributes":"{Name:multinode-510563 ClientURLs:[https://192.168.39.38:2379]}","request-path":"/0/members/38b26e584d45e0da/attributes","cluster-id":"afb1a6a08b4dab74","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:20:31.266708Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:20:31.267285Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"afb1a6a08b4dab74","local-member-id":"38b26e584d45e0da","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:20:31.267338Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:20:31.267354Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:20:31.267369Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:20:31.274255Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.38:2379"}
	{"level":"info","ts":"2023-12-12T23:20:31.283358Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T23:20:31.294054Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:20:31.294112Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2023-12-12T23:21:33.552903Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.053315ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16202416954789222636 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:491 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-12T23:21:33.553407Z","caller":"traceutil/trace.go:171","msg":"trace[1583865060] linearizableReadLoop","detail":"{readStateIndex:536; appliedIndex:534; }","duration":"127.838479ms","start":"2023-12-12T23:21:33.425535Z","end":"2023-12-12T23:21:33.553373Z","steps":["trace[1583865060] 'read index received'  (duration: 126.933068ms)","trace[1583865060] 'applied index is now lower than readState.Index'  (duration: 904.744µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T23:21:33.553534Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.008448ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-510563-m02\" ","response":"range_response_count:1 size:3008"}
	{"level":"info","ts":"2023-12-12T23:21:33.55358Z","caller":"traceutil/trace.go:171","msg":"trace[1382180543] range","detail":"{range_begin:/registry/minions/multinode-510563-m02; range_end:; response_count:1; response_revision:515; }","duration":"128.057985ms","start":"2023-12-12T23:21:33.425512Z","end":"2023-12-12T23:21:33.55357Z","steps":["trace[1382180543] 'agreement among raft nodes before linearized reading'  (duration: 127.959417ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T23:21:33.55381Z","caller":"traceutil/trace.go:171","msg":"trace[185186584] transaction","detail":"{read_only:false; response_revision:514; number_of_response:1; }","duration":"277.49349ms","start":"2023-12-12T23:21:33.276303Z","end":"2023-12-12T23:21:33.553797Z","steps":["trace[185186584] 'process raft request'  (duration: 63.54395ms)","trace[185186584] 'compare'  (duration: 211.699314ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T23:21:33.554124Z","caller":"traceutil/trace.go:171","msg":"trace[1070081877] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"262.987455ms","start":"2023-12-12T23:21:33.291129Z","end":"2023-12-12T23:21:33.554117Z","steps":["trace[1070081877] 'process raft request'  (duration: 262.171968ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T23:21:33.605105Z","caller":"traceutil/trace.go:171","msg":"trace[1732031775] transaction","detail":"{read_only:false; response_revision:516; number_of_response:1; }","duration":"109.848493ms","start":"2023-12-12T23:21:33.495241Z","end":"2023-12-12T23:21:33.605089Z","steps":["trace[1732031775] 'process raft request'  (duration: 109.475877ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  23:21:54 up 2 min,  0 users,  load average: 0.84, 0.38, 0.14
	Linux multinode-510563 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [fc21d2809082dd6b2a02c76aada4170fe160741303091f4ec745b65babf5c4d2] <==
	* I1212 23:20:54.735088       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 23:20:54.735247       1 main.go:107] hostIP = 192.168.39.38
	podIP = 192.168.39.38
	I1212 23:20:54.735524       1 main.go:116] setting mtu 1500 for CNI 
	I1212 23:20:54.735566       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 23:20:54.735609       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 23:20:55.335593       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I1212 23:20:55.428675       1 main.go:227] handling current node
	I1212 23:21:05.440643       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I1212 23:21:05.440860       1 main.go:227] handling current node
	I1212 23:21:15.453517       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I1212 23:21:15.453682       1 main.go:227] handling current node
	I1212 23:21:25.459730       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I1212 23:21:25.459802       1 main.go:227] handling current node
	I1212 23:21:35.472727       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I1212 23:21:35.472837       1 main.go:227] handling current node
	I1212 23:21:35.472870       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I1212 23:21:35.472888       1 main.go:250] Node multinode-510563-m02 has CIDR [10.244.1.0/24] 
	I1212 23:21:35.473244       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.109 Flags: [] Table: 0} 
	I1212 23:21:45.480950       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I1212 23:21:45.481182       1 main.go:227] handling current node
	I1212 23:21:45.481218       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I1212 23:21:45.481244       1 main.go:250] Node multinode-510563-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [0fe05b10bfcd6f6175b47556313838815da9a96a03c510f0440e507fb82c5f96] <==
	* I1212 23:20:33.164255       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 23:20:33.187331       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 23:20:33.189720       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 23:20:33.189798       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 23:20:33.190216       1 aggregator.go:166] initial CRD sync complete...
	I1212 23:20:33.190282       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 23:20:33.190309       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 23:20:33.190335       1 cache.go:39] Caches are synced for autoregister controller
	I1212 23:20:33.202829       1 controller.go:624] quota admission added evaluator for: namespaces
	I1212 23:20:33.264754       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 23:20:34.071375       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 23:20:34.076255       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 23:20:34.076299       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 23:20:34.686975       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 23:20:34.727173       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 23:20:34.844961       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 23:20:34.857778       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.38]
	I1212 23:20:34.858882       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 23:20:34.866333       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 23:20:35.176705       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 23:20:36.240724       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 23:20:36.258565       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 23:20:36.275314       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 23:20:48.184939       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1212 23:20:49.059952       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [d24febbad2ba0b2bb214a84438466a95c93679bc8df4106b4f4d4ab5653ef760] <==
	* I1212 23:20:49.350577       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="224.089µs"
	I1212 23:20:55.879940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="123.942µs"
	I1212 23:20:55.913391       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.199µs"
	I1212 23:20:57.558868       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.726µs"
	I1212 23:20:57.620857       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.526699ms"
	I1212 23:20:57.621290       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="146.624µs"
	I1212 23:20:58.183866       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1212 23:21:31.629348       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-510563-m02\" does not exist"
	I1212 23:21:31.645410       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-510563-m02" podCIDRs=["10.244.1.0/24"]
	I1212 23:21:31.656717       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5v7sf"
	I1212 23:21:31.664864       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-msx8s"
	I1212 23:21:33.191415       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-510563-m02"
	I1212 23:21:33.191610       1 event.go:307] "Event occurred" object="multinode-510563-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-510563-m02 event: Registered Node multinode-510563-m02 in Controller"
	I1212 23:21:41.975265       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-510563-m02"
	I1212 23:21:44.557858       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1212 23:21:44.579987       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-6hjc6"
	I1212 23:21:44.592923       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-4vnmj"
	I1212 23:21:44.617839       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="58.624426ms"
	I1212 23:21:44.636121       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="18.19428ms"
	I1212 23:21:44.636354       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="60.882µs"
	I1212 23:21:44.642547       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="68.347µs"
	I1212 23:21:49.437895       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="18.204995ms"
	I1212 23:21:49.438104       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="119.413µs"
	I1212 23:21:50.746279       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="9.350161ms"
	I1212 23:21:50.746369       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="38.711µs"
	
	* 
	* ==> kube-proxy [f17633bb992c3c03ff3efdb724e554956c6cb9b125ef74c259e8993124e52534] <==
	* I1212 23:20:51.754070       1 server_others.go:69] "Using iptables proxy"
	I1212 23:20:51.772137       1 node.go:141] Successfully retrieved node IP: 192.168.39.38
	I1212 23:20:51.832958       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 23:20:51.833059       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:20:51.838147       1 server_others.go:152] "Using iptables Proxier"
	I1212 23:20:51.838228       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:20:51.838359       1 server.go:846] "Version info" version="v1.28.4"
	I1212 23:20:51.838371       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:20:51.839587       1 config.go:188] "Starting service config controller"
	I1212 23:20:51.839641       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:20:51.839668       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:20:51.839672       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:20:51.841894       1 config.go:315] "Starting node config controller"
	I1212 23:20:51.841938       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:20:51.939736       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 23:20:51.939799       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:20:51.942811       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [1009384227c15b35e74fed818350fd0bb27595ebbd39a5d68ba2b7cf9b032705] <==
	* W1212 23:20:33.252112       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 23:20:33.252190       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 23:20:33.252354       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 23:20:33.252473       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 23:20:33.244530       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 23:20:33.253201       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1212 23:20:34.151403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 23:20:34.151470       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1212 23:20:34.220129       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 23:20:34.220182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 23:20:34.273390       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 23:20:34.273438       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:20:34.317593       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 23:20:34.317679       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 23:20:34.375802       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 23:20:34.375886       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 23:20:34.381233       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 23:20:34.381280       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 23:20:34.382412       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 23:20:34.382479       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 23:20:34.423305       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 23:20:34.423573       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 23:20:34.464215       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 23:20:34.464313       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1212 23:20:36.713469       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:20:02 UTC, ends at Tue 2023-12-12 23:21:54 UTC. --
	Dec 12 23:20:50 multinode-510563 kubelet[1268]: E1212 23:20:50.473628    1268 projected.go:198] Error preparing data for projected volume kube-api-access-pmqqj for pod kube-system/kindnet-v4js8: failed to sync configmap cache: timed out waiting for the condition
	Dec 12 23:20:50 multinode-510563 kubelet[1268]: E1212 23:20:50.473737    1268 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a2255be6-8705-40cd-8f35-a3e82906190c-kube-api-access-bwg4r podName:a2255be6-8705-40cd-8f35-a3e82906190c nodeName:}" failed. No retries permitted until 2023-12-12 23:20:50.973719648 +0000 UTC m=+14.759127516 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bwg4r" (UniqueName: "kubernetes.io/projected/a2255be6-8705-40cd-8f35-a3e82906190c-kube-api-access-bwg4r") pod "kube-proxy-hspw8" (UID: "a2255be6-8705-40cd-8f35-a3e82906190c") : failed to sync configmap cache: timed out waiting for the condition
	Dec 12 23:20:50 multinode-510563 kubelet[1268]: E1212 23:20:50.474091    1268 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cfe24f85-472c-4ef2-9a48-9e3647cc8feb-kube-api-access-pmqqj podName:cfe24f85-472c-4ef2-9a48-9e3647cc8feb nodeName:}" failed. No retries permitted until 2023-12-12 23:20:50.974072897 +0000 UTC m=+14.759480772 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pmqqj" (UniqueName: "kubernetes.io/projected/cfe24f85-472c-4ef2-9a48-9e3647cc8feb-kube-api-access-pmqqj") pod "kindnet-v4js8" (UID: "cfe24f85-472c-4ef2-9a48-9e3647cc8feb") : failed to sync configmap cache: timed out waiting for the condition
	Dec 12 23:20:54 multinode-510563 kubelet[1268]: I1212 23:20:54.544446    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hspw8" podStartSLOduration=5.544374946 podCreationTimestamp="2023-12-12 23:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 23:20:52.534062016 +0000 UTC m=+16.319469887" watchObservedRunningTime="2023-12-12 23:20:54.544374946 +0000 UTC m=+18.329782819"
	Dec 12 23:20:55 multinode-510563 kubelet[1268]: I1212 23:20:55.825889    1268 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 12 23:20:55 multinode-510563 kubelet[1268]: I1212 23:20:55.864492    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-v4js8" podStartSLOduration=6.864439029 podCreationTimestamp="2023-12-12 23:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 23:20:54.546465324 +0000 UTC m=+18.331873197" watchObservedRunningTime="2023-12-12 23:20:55.864439029 +0000 UTC m=+19.649846902"
	Dec 12 23:20:55 multinode-510563 kubelet[1268]: I1212 23:20:55.864697    1268 topology_manager.go:215] "Topology Admit Handler" podUID="cb4f186a-9bb9-488f-8a74-6e01f352fc05" podNamespace="kube-system" podName="storage-provisioner"
	Dec 12 23:20:55 multinode-510563 kubelet[1268]: I1212 23:20:55.871492    1268 topology_manager.go:215] "Topology Admit Handler" podUID="503de693-19d6-45c5-97c6-3b8e5657bfee" podNamespace="kube-system" podName="coredns-5dd5756b68-zcxks"
	Dec 12 23:20:55 multinode-510563 kubelet[1268]: I1212 23:20:55.918948    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2kxv\" (UniqueName: \"kubernetes.io/projected/503de693-19d6-45c5-97c6-3b8e5657bfee-kube-api-access-x2kxv\") pod \"coredns-5dd5756b68-zcxks\" (UID: \"503de693-19d6-45c5-97c6-3b8e5657bfee\") " pod="kube-system/coredns-5dd5756b68-zcxks"
	Dec 12 23:20:55 multinode-510563 kubelet[1268]: I1212 23:20:55.919074    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/503de693-19d6-45c5-97c6-3b8e5657bfee-config-volume\") pod \"coredns-5dd5756b68-zcxks\" (UID: \"503de693-19d6-45c5-97c6-3b8e5657bfee\") " pod="kube-system/coredns-5dd5756b68-zcxks"
	Dec 12 23:20:55 multinode-510563 kubelet[1268]: I1212 23:20:55.919103    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cb4f186a-9bb9-488f-8a74-6e01f352fc05-tmp\") pod \"storage-provisioner\" (UID: \"cb4f186a-9bb9-488f-8a74-6e01f352fc05\") " pod="kube-system/storage-provisioner"
	Dec 12 23:20:55 multinode-510563 kubelet[1268]: I1212 23:20:55.919122    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knr4n\" (UniqueName: \"kubernetes.io/projected/cb4f186a-9bb9-488f-8a74-6e01f352fc05-kube-api-access-knr4n\") pod \"storage-provisioner\" (UID: \"cb4f186a-9bb9-488f-8a74-6e01f352fc05\") " pod="kube-system/storage-provisioner"
	Dec 12 23:20:57 multinode-510563 kubelet[1268]: I1212 23:20:57.583293    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-zcxks" podStartSLOduration=9.583249234 podCreationTimestamp="2023-12-12 23:20:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 23:20:57.561864329 +0000 UTC m=+21.347272202" watchObservedRunningTime="2023-12-12 23:20:57.583249234 +0000 UTC m=+21.368657132"
	Dec 12 23:20:57 multinode-510563 kubelet[1268]: I1212 23:20:57.603583    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=8.603535136 podCreationTimestamp="2023-12-12 23:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 23:20:57.585258216 +0000 UTC m=+21.370666084" watchObservedRunningTime="2023-12-12 23:20:57.603535136 +0000 UTC m=+21.388943042"
	Dec 12 23:21:36 multinode-510563 kubelet[1268]: E1212 23:21:36.451734    1268 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:21:36 multinode-510563 kubelet[1268]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:21:36 multinode-510563 kubelet[1268]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:21:36 multinode-510563 kubelet[1268]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:21:44 multinode-510563 kubelet[1268]: I1212 23:21:44.606580    1268 topology_manager.go:215] "Topology Admit Handler" podUID="00d42ae1-e3c5-461d-9019-b5609191598e" podNamespace="default" podName="busybox-5bc68d56bd-4vnmj"
	Dec 12 23:21:44 multinode-510563 kubelet[1268]: W1212 23:21:44.614873    1268 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-510563" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-510563' and this object
	Dec 12 23:21:44 multinode-510563 kubelet[1268]: E1212 23:21:44.614973    1268 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-510563" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-510563' and this object
	Dec 12 23:21:44 multinode-510563 kubelet[1268]: I1212 23:21:44.701218    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjwzt\" (UniqueName: \"kubernetes.io/projected/00d42ae1-e3c5-461d-9019-b5609191598e-kube-api-access-jjwzt\") pod \"busybox-5bc68d56bd-4vnmj\" (UID: \"00d42ae1-e3c5-461d-9019-b5609191598e\") " pod="default/busybox-5bc68d56bd-4vnmj"
	Dec 12 23:21:45 multinode-510563 kubelet[1268]: E1212 23:21:45.809321    1268 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Dec 12 23:21:45 multinode-510563 kubelet[1268]: E1212 23:21:45.809403    1268 projected.go:198] Error preparing data for projected volume kube-api-access-jjwzt for pod default/busybox-5bc68d56bd-4vnmj: failed to sync configmap cache: timed out waiting for the condition
	Dec 12 23:21:45 multinode-510563 kubelet[1268]: E1212 23:21:45.809616    1268 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00d42ae1-e3c5-461d-9019-b5609191598e-kube-api-access-jjwzt podName:00d42ae1-e3c5-461d-9019-b5609191598e nodeName:}" failed. No retries permitted until 2023-12-12 23:21:46.309551647 +0000 UTC m=+70.094959502 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jjwzt" (UniqueName: "kubernetes.io/projected/00d42ae1-e3c5-461d-9019-b5609191598e-kube-api-access-jjwzt") pod "busybox-5bc68d56bd-4vnmj" (UID: "00d42ae1-e3c5-461d-9019-b5609191598e") : failed to sync configmap cache: timed out waiting for the condition
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-510563 -n multinode-510563
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-510563 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.29s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (688.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-510563
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-510563
E1212 23:24:27.617243  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1212 23:25:11.805016  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-510563: exit status 82 (2m1.139986171s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-510563"  ...
	* Stopping node "multinode-510563"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-510563" : exit status 82
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-510563 --wait=true -v=8 --alsologtostderr
E1212 23:26:34.849654  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
E1212 23:27:45.321252  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
E1212 23:29:27.617477  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1212 23:30:11.804614  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
E1212 23:30:50.662745  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1212 23:32:45.322921  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
E1212 23:34:08.499956  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
E1212 23:34:27.616986  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-510563 --wait=true -v=8 --alsologtostderr: (9m24.319521624s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-510563
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-510563 -n multinode-510563
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-510563 logs -n 25: (1.607935726s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-510563 ssh -n                                                                 | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | multinode-510563-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-510563 cp multinode-510563-m02:/home/docker/cp-test.txt                       | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1537792593/001/cp-test_multinode-510563-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-510563 ssh -n                                                                 | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | multinode-510563-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-510563 cp multinode-510563-m02:/home/docker/cp-test.txt                       | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | multinode-510563:/home/docker/cp-test_multinode-510563-m02_multinode-510563.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-510563 ssh -n                                                                 | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | multinode-510563-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-510563 ssh -n multinode-510563 sudo cat                                       | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | /home/docker/cp-test_multinode-510563-m02_multinode-510563.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-510563 cp multinode-510563-m02:/home/docker/cp-test.txt                       | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | multinode-510563-m03:/home/docker/cp-test_multinode-510563-m02_multinode-510563-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-510563 ssh -n                                                                 | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | multinode-510563-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-510563 ssh -n multinode-510563-m03 sudo cat                                   | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | /home/docker/cp-test_multinode-510563-m02_multinode-510563-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-510563 cp testdata/cp-test.txt                                                | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | multinode-510563-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-510563 ssh -n                                                                 | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | multinode-510563-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-510563 cp multinode-510563-m03:/home/docker/cp-test.txt                       | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1537792593/001/cp-test_multinode-510563-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-510563 ssh -n                                                                 | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | multinode-510563-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-510563 cp multinode-510563-m03:/home/docker/cp-test.txt                       | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | multinode-510563:/home/docker/cp-test_multinode-510563-m03_multinode-510563.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-510563 ssh -n                                                                 | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | multinode-510563-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-510563 ssh -n multinode-510563 sudo cat                                       | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | /home/docker/cp-test_multinode-510563-m03_multinode-510563.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-510563 cp multinode-510563-m03:/home/docker/cp-test.txt                       | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | multinode-510563-m02:/home/docker/cp-test_multinode-510563-m03_multinode-510563-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-510563 ssh -n                                                                 | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | multinode-510563-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-510563 ssh -n multinode-510563-m02 sudo cat                                   | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | /home/docker/cp-test_multinode-510563-m03_multinode-510563-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-510563 node stop m03                                                          | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	| node    | multinode-510563 node start                                                             | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:23 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-510563                                                                | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:23 UTC |                     |
	| stop    | -p multinode-510563                                                                     | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:23 UTC |                     |
	| start   | -p multinode-510563                                                                     | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:25 UTC | 12 Dec 23 23:34 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-510563                                                                | multinode-510563 | jenkins | v1.32.0 | 12 Dec 23 23:34 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 23:25:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 23:25:24.847551  160181 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:25:24.847808  160181 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:25:24.847817  160181 out.go:309] Setting ErrFile to fd 2...
	I1212 23:25:24.847821  160181 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:25:24.848005  160181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1212 23:25:24.848592  160181 out.go:303] Setting JSON to false
	I1212 23:25:24.849554  160181 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7673,"bootTime":1702415852,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 23:25:24.849618  160181 start.go:138] virtualization: kvm guest
	I1212 23:25:24.851845  160181 out.go:177] * [multinode-510563] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 23:25:24.853266  160181 out.go:177]   - MINIKUBE_LOCATION=17777
	I1212 23:25:24.854486  160181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:25:24.853305  160181 notify.go:220] Checking for updates...
	I1212 23:25:24.856875  160181 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:25:24.858110  160181 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 23:25:24.859366  160181 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 23:25:24.860631  160181 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:25:24.862332  160181 config.go:182] Loaded profile config "multinode-510563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:25:24.862417  160181 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:25:24.862788  160181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:25:24.862843  160181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:25:24.877049  160181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40985
	I1212 23:25:24.877501  160181 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:25:24.878003  160181 main.go:141] libmachine: Using API Version  1
	I1212 23:25:24.878018  160181 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:25:24.878346  160181 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:25:24.878546  160181 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:25:24.912795  160181 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 23:25:24.913925  160181 start.go:298] selected driver: kvm2
	I1212 23:25:24.913936  160181 start.go:902] validating driver "kvm2" against &{Name:multinode-510563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-510563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.133 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:25:24.914080  160181 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:25:24.914379  160181 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:25:24.914437  160181 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17777-136241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 23:25:24.929128  160181 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 23:25:24.929833  160181 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 23:25:24.929891  160181 cni.go:84] Creating CNI manager for ""
	I1212 23:25:24.929902  160181 cni.go:136] 3 nodes found, recommending kindnet
	I1212 23:25:24.929910  160181 start_flags.go:323] config:
	{Name:multinode-510563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-510563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.133 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:25:24.930118  160181 iso.go:125] acquiring lock: {Name:mkaf0c5717f5bf6253bd7ebf86eb20a82d195bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:25:24.932116  160181 out.go:177] * Starting control plane node multinode-510563 in cluster multinode-510563
	I1212 23:25:24.933514  160181 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:25:24.933554  160181 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 23:25:24.933566  160181 cache.go:56] Caching tarball of preloaded images
	I1212 23:25:24.933697  160181 preload.go:174] Found /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 23:25:24.933740  160181 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 23:25:24.933920  160181 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/config.json ...
	I1212 23:25:24.934144  160181 start.go:365] acquiring machines lock for multinode-510563: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:25:24.934194  160181 start.go:369] acquired machines lock for "multinode-510563" in 27.994µs
	I1212 23:25:24.934209  160181 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:25:24.934214  160181 fix.go:54] fixHost starting: 
	I1212 23:25:24.934470  160181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:25:24.934507  160181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:25:24.949105  160181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I1212 23:25:24.949531  160181 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:25:24.949946  160181 main.go:141] libmachine: Using API Version  1
	I1212 23:25:24.949971  160181 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:25:24.950274  160181 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:25:24.950443  160181 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:25:24.950592  160181 main.go:141] libmachine: (multinode-510563) Calling .GetState
	I1212 23:25:24.952107  160181 fix.go:102] recreateIfNeeded on multinode-510563: state=Running err=<nil>
	W1212 23:25:24.952126  160181 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:25:24.954164  160181 out.go:177] * Updating the running kvm2 "multinode-510563" VM ...
	I1212 23:25:24.955437  160181 machine.go:88] provisioning docker machine ...
	I1212 23:25:24.955456  160181 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:25:24.955672  160181 main.go:141] libmachine: (multinode-510563) Calling .GetMachineName
	I1212 23:25:24.955837  160181 buildroot.go:166] provisioning hostname "multinode-510563"
	I1212 23:25:24.955873  160181 main.go:141] libmachine: (multinode-510563) Calling .GetMachineName
	I1212 23:25:24.956012  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:25:24.958463  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:25:24.958932  160181 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:25:24.958959  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:25:24.959140  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:25:24.959315  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:25:24.959460  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:25:24.959588  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:25:24.959724  160181 main.go:141] libmachine: Using SSH client type: native
	I1212 23:25:24.960205  160181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1212 23:25:24.960228  160181 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-510563 && echo "multinode-510563" | sudo tee /etc/hostname
	I1212 23:25:43.364733  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:25:49.444722  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:25:52.516787  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:25:58.596758  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:26:01.668815  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:26:07.748754  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:26:10.820700  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:26:16.900736  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:26:19.972692  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:26:26.052693  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:26:29.124720  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:26:35.204806  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:26:38.276746  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:26:44.356775  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:26:47.428802  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:26:53.508733  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:26:56.580686  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:27:02.661215  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:27:05.732738  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:27:11.812709  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:27:14.884617  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:27:20.964684  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:27:24.036674  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:27:30.116710  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:27:33.188809  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:27:39.268737  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:27:42.340698  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:27:48.420719  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:27:51.496652  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:27:57.572795  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:28:00.644758  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:28:06.724749  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:28:09.796708  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:28:15.876806  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:28:18.948878  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:28:25.028679  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:28:28.100661  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:28:34.180747  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:28:37.252756  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:28:43.332766  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:28:46.404688  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:28:52.484732  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:28:55.556765  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:29:01.636694  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:29:04.708696  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:29:10.788723  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:29:13.860710  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:29:19.940765  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:29:23.012899  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:29:29.092792  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:29:32.164757  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:29:38.244738  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:29:41.316785  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:29:47.396805  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:29:50.468903  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:29:56.548676  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:29:59.620673  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:30:05.700747  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:30:08.772664  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:30:14.852645  160181 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I1212 23:30:17.854685  160181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:30:17.854722  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:30:17.856844  160181 machine.go:91] provisioned docker machine in 4m52.901388146s
	I1212 23:30:17.856882  160181 fix.go:56] fixHost completed within 4m52.922668507s
	I1212 23:30:17.856887  160181 start.go:83] releasing machines lock for "multinode-510563", held for 4m52.922684507s
	W1212 23:30:17.856902  160181 start.go:694] error starting host: provision: host is not running
	W1212 23:30:17.857006  160181 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1212 23:30:17.857015  160181 start.go:709] Will try again in 5 seconds ...
	I1212 23:30:22.859164  160181 start.go:365] acquiring machines lock for multinode-510563: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:30:22.859266  160181 start.go:369] acquired machines lock for "multinode-510563" in 62.353µs
	I1212 23:30:22.859286  160181 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:30:22.859292  160181 fix.go:54] fixHost starting: 
	I1212 23:30:22.859603  160181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:30:22.859625  160181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:30:22.875034  160181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42483
	I1212 23:30:22.875478  160181 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:30:22.876064  160181 main.go:141] libmachine: Using API Version  1
	I1212 23:30:22.876095  160181 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:30:22.876406  160181 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:30:22.876623  160181 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:30:22.876763  160181 main.go:141] libmachine: (multinode-510563) Calling .GetState
	I1212 23:30:22.878510  160181 fix.go:102] recreateIfNeeded on multinode-510563: state=Stopped err=<nil>
	I1212 23:30:22.878539  160181 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	W1212 23:30:22.878729  160181 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:30:22.880705  160181 out.go:177] * Restarting existing kvm2 VM for "multinode-510563" ...
	I1212 23:30:22.881828  160181 main.go:141] libmachine: (multinode-510563) Calling .Start
	I1212 23:30:22.881983  160181 main.go:141] libmachine: (multinode-510563) Ensuring networks are active...
	I1212 23:30:22.882708  160181 main.go:141] libmachine: (multinode-510563) Ensuring network default is active
	I1212 23:30:22.882973  160181 main.go:141] libmachine: (multinode-510563) Ensuring network mk-multinode-510563 is active
	I1212 23:30:22.883327  160181 main.go:141] libmachine: (multinode-510563) Getting domain xml...
	I1212 23:30:22.883989  160181 main.go:141] libmachine: (multinode-510563) Creating domain...
	I1212 23:30:24.101647  160181 main.go:141] libmachine: (multinode-510563) Waiting to get IP...
	I1212 23:30:24.102706  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:24.103141  160181 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:30:24.103208  160181 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:30:24.103129  160985 retry.go:31] will retry after 286.218337ms: waiting for machine to come up
	I1212 23:30:24.390821  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:24.391341  160181 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:30:24.391374  160181 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:30:24.391290  160985 retry.go:31] will retry after 385.263894ms: waiting for machine to come up
	I1212 23:30:24.778785  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:24.779280  160181 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:30:24.779306  160181 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:30:24.779226  160985 retry.go:31] will retry after 305.057691ms: waiting for machine to come up
	I1212 23:30:25.085635  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:25.086163  160181 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:30:25.086189  160181 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:30:25.086109  160985 retry.go:31] will retry after 393.13477ms: waiting for machine to come up
	I1212 23:30:25.480781  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:25.481278  160181 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:30:25.481310  160181 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:30:25.481235  160985 retry.go:31] will retry after 697.800853ms: waiting for machine to come up
	I1212 23:30:26.180212  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:26.180644  160181 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:30:26.180670  160181 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:30:26.180589  160985 retry.go:31] will retry after 797.645306ms: waiting for machine to come up
	I1212 23:30:26.979542  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:26.979982  160181 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:30:26.980011  160181 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:30:26.979938  160985 retry.go:31] will retry after 732.512855ms: waiting for machine to come up
	I1212 23:30:27.713581  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:27.713997  160181 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:30:27.714020  160181 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:30:27.713953  160985 retry.go:31] will retry after 1.425216526s: waiting for machine to come up
	I1212 23:30:29.141089  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:29.141548  160181 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:30:29.141574  160181 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:30:29.141515  160985 retry.go:31] will retry after 1.453882814s: waiting for machine to come up
	I1212 23:30:30.597214  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:30.597619  160181 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:30:30.597642  160181 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:30:30.597574  160985 retry.go:31] will retry after 1.653086369s: waiting for machine to come up
	I1212 23:30:32.253187  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:32.253570  160181 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:30:32.253601  160181 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:30:32.253511  160985 retry.go:31] will retry after 2.195500884s: waiting for machine to come up
	I1212 23:30:34.450876  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:34.451324  160181 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:30:34.451351  160181 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:30:34.451275  160985 retry.go:31] will retry after 2.251154003s: waiting for machine to come up
	I1212 23:30:36.705747  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:36.706190  160181 main.go:141] libmachine: (multinode-510563) DBG | unable to find current IP address of domain multinode-510563 in network mk-multinode-510563
	I1212 23:30:36.706223  160181 main.go:141] libmachine: (multinode-510563) DBG | I1212 23:30:36.706136  160985 retry.go:31] will retry after 4.513002354s: waiting for machine to come up
	I1212 23:30:41.222056  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:41.222573  160181 main.go:141] libmachine: (multinode-510563) Found IP for machine: 192.168.39.38
	I1212 23:30:41.222607  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has current primary IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:41.222617  160181 main.go:141] libmachine: (multinode-510563) Reserving static IP address...
	I1212 23:30:41.223045  160181 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "multinode-510563", mac: "52:54:00:2d:9f:26", ip: "192.168.39.38"} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:30:41.223075  160181 main.go:141] libmachine: (multinode-510563) Reserved static IP address: 192.168.39.38
	I1212 23:30:41.223099  160181 main.go:141] libmachine: (multinode-510563) DBG | skip adding static IP to network mk-multinode-510563 - found existing host DHCP lease matching {name: "multinode-510563", mac: "52:54:00:2d:9f:26", ip: "192.168.39.38"}
	I1212 23:30:41.223116  160181 main.go:141] libmachine: (multinode-510563) Waiting for SSH to be available...
	I1212 23:30:41.223136  160181 main.go:141] libmachine: (multinode-510563) DBG | Getting to WaitForSSH function...
	I1212 23:30:41.225295  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:41.225649  160181 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:30:41.225682  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:41.225764  160181 main.go:141] libmachine: (multinode-510563) DBG | Using SSH client type: external
	I1212 23:30:41.225793  160181 main.go:141] libmachine: (multinode-510563) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa (-rw-------)
	I1212 23:30:41.225828  160181 main.go:141] libmachine: (multinode-510563) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:30:41.225847  160181 main.go:141] libmachine: (multinode-510563) DBG | About to run SSH command:
	I1212 23:30:41.225861  160181 main.go:141] libmachine: (multinode-510563) DBG | exit 0
	I1212 23:30:41.312118  160181 main.go:141] libmachine: (multinode-510563) DBG | SSH cmd err, output: <nil>: 
	I1212 23:30:41.312525  160181 main.go:141] libmachine: (multinode-510563) Calling .GetConfigRaw
	I1212 23:30:41.313113  160181 main.go:141] libmachine: (multinode-510563) Calling .GetIP
	I1212 23:30:41.315406  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:41.315793  160181 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:30:41.315833  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:41.316072  160181 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/config.json ...
	I1212 23:30:41.316279  160181 machine.go:88] provisioning docker machine ...
	I1212 23:30:41.316309  160181 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:30:41.316495  160181 main.go:141] libmachine: (multinode-510563) Calling .GetMachineName
	I1212 23:30:41.316639  160181 buildroot.go:166] provisioning hostname "multinode-510563"
	I1212 23:30:41.316668  160181 main.go:141] libmachine: (multinode-510563) Calling .GetMachineName
	I1212 23:30:41.316798  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:30:41.318922  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:41.319222  160181 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:30:41.319248  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:41.319390  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:30:41.319564  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:30:41.319714  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:30:41.319851  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:30:41.320051  160181 main.go:141] libmachine: Using SSH client type: native
	I1212 23:30:41.320369  160181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1212 23:30:41.320385  160181 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-510563 && echo "multinode-510563" | sudo tee /etc/hostname
	I1212 23:30:41.445147  160181 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-510563
	
	I1212 23:30:41.445172  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:30:41.448097  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:41.448448  160181 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:30:41.448476  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:41.448637  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:30:41.448855  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:30:41.449013  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:30:41.449129  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:30:41.449303  160181 main.go:141] libmachine: Using SSH client type: native
	I1212 23:30:41.449832  160181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1212 23:30:41.449861  160181 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-510563' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-510563/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-510563' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:30:41.568303  160181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:30:41.568338  160181 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1212 23:30:41.568359  160181 buildroot.go:174] setting up certificates
	I1212 23:30:41.568369  160181 provision.go:83] configureAuth start
	I1212 23:30:41.568378  160181 main.go:141] libmachine: (multinode-510563) Calling .GetMachineName
	I1212 23:30:41.568688  160181 main.go:141] libmachine: (multinode-510563) Calling .GetIP
	I1212 23:30:41.571468  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:41.571907  160181 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:30:41.571932  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:41.572109  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:30:41.574240  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:41.574587  160181 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:30:41.574622  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:41.574762  160181 provision.go:138] copyHostCerts
	I1212 23:30:41.574791  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1212 23:30:41.574827  160181 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1212 23:30:41.574838  160181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1212 23:30:41.574917  160181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1212 23:30:41.575012  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1212 23:30:41.575032  160181 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1212 23:30:41.575039  160181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1212 23:30:41.575063  160181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1212 23:30:41.575119  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1212 23:30:41.575137  160181 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1212 23:30:41.575141  160181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1212 23:30:41.575162  160181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1212 23:30:41.575215  160181 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.multinode-510563 san=[192.168.39.38 192.168.39.38 localhost 127.0.0.1 minikube multinode-510563]
	I1212 23:30:41.705005  160181 provision.go:172] copyRemoteCerts
	I1212 23:30:41.705073  160181 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:30:41.705096  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:30:41.707947  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:41.708288  160181 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:30:41.708320  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:41.708536  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:30:41.708718  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:30:41.708877  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:30:41.709008  160181 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa Username:docker}
	I1212 23:30:41.793972  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 23:30:41.794045  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 23:30:41.815357  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 23:30:41.815434  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:30:41.837164  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 23:30:41.837217  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 23:30:41.858215  160181 provision.go:86] duration metric: configureAuth took 289.834644ms
	I1212 23:30:41.858240  160181 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:30:41.858438  160181 config.go:182] Loaded profile config "multinode-510563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:30:41.858525  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:30:41.861438  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:41.861783  160181 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:30:41.861803  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:41.862060  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:30:41.862252  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:30:41.862435  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:30:41.862603  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:30:41.862849  160181 main.go:141] libmachine: Using SSH client type: native
	I1212 23:30:41.863320  160181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1212 23:30:41.863344  160181 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:30:42.180427  160181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:30:42.180477  160181 machine.go:91] provisioned docker machine in 864.177497ms
	I1212 23:30:42.180489  160181 start.go:300] post-start starting for "multinode-510563" (driver="kvm2")
	I1212 23:30:42.180502  160181 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:30:42.180523  160181 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:30:42.180873  160181 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:30:42.180898  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:30:42.183483  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:42.183974  160181 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:30:42.184004  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:42.184140  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:30:42.184332  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:30:42.184490  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:30:42.184632  160181 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa Username:docker}
	I1212 23:30:42.271527  160181 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:30:42.275973  160181 command_runner.go:130] > NAME=Buildroot
	I1212 23:30:42.275995  160181 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
	I1212 23:30:42.276000  160181 command_runner.go:130] > ID=buildroot
	I1212 23:30:42.276006  160181 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 23:30:42.276010  160181 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 23:30:42.276039  160181 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:30:42.276050  160181 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1212 23:30:42.276108  160181 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1212 23:30:42.276175  160181 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1212 23:30:42.276185  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> /etc/ssl/certs/1435412.pem
	I1212 23:30:42.276260  160181 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:30:42.285821  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1212 23:30:42.312058  160181 start.go:303] post-start completed in 131.550201ms
	I1212 23:30:42.312084  160181 fix.go:56] fixHost completed within 19.452792719s
	I1212 23:30:42.312104  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:30:42.314737  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:42.315084  160181 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:30:42.315113  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:42.315306  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:30:42.315502  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:30:42.315666  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:30:42.315807  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:30:42.315995  160181 main.go:141] libmachine: Using SSH client type: native
	I1212 23:30:42.316306  160181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1212 23:30:42.316318  160181 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:30:42.429062  160181 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423842.379275497
	
	I1212 23:30:42.429091  160181 fix.go:206] guest clock: 1702423842.379275497
	I1212 23:30:42.429106  160181 fix.go:219] Guest: 2023-12-12 23:30:42.379275497 +0000 UTC Remote: 2023-12-12 23:30:42.3120879 +0000 UTC m=+317.514818413 (delta=67.187597ms)
	I1212 23:30:42.429131  160181 fix.go:190] guest clock delta is within tolerance: 67.187597ms
	I1212 23:30:42.429138  160181 start.go:83] releasing machines lock for "multinode-510563", held for 19.569863767s
	I1212 23:30:42.429163  160181 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:30:42.429450  160181 main.go:141] libmachine: (multinode-510563) Calling .GetIP
	I1212 23:30:42.432007  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:42.432357  160181 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:30:42.432386  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:42.432547  160181 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:30:42.433044  160181 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:30:42.433247  160181 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:30:42.433327  160181 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:30:42.433380  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:30:42.433490  160181 ssh_runner.go:195] Run: cat /version.json
	I1212 23:30:42.433520  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:30:42.435731  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:42.436075  160181 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:30:42.436111  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:42.436130  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:42.436250  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:30:42.436559  160181 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:30:42.436581  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:42.436594  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:30:42.436706  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:30:42.436783  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:30:42.436856  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:30:42.436913  160181 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa Username:docker}
	I1212 23:30:42.436967  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:30:42.437101  160181 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa Username:docker}
	I1212 23:30:42.541002  160181 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 23:30:42.541866  160181 command_runner.go:130] > {"iso_version": "v1.32.1-1701996673-17738", "kicbase_version": "v0.0.42-1701974066-17719", "minikube_version": "v1.32.0", "commit": "2518fadffa02a308edcd7fa670f350a21819c5e4"}
	I1212 23:30:42.542048  160181 ssh_runner.go:195] Run: systemctl --version
	I1212 23:30:42.547806  160181 command_runner.go:130] > systemd 247 (247)
	I1212 23:30:42.547832  160181 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 23:30:42.548146  160181 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:30:42.707372  160181 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 23:30:42.713875  160181 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 23:30:42.714215  160181 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:30:42.714287  160181 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:30:42.732296  160181 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 23:30:42.732481  160181 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:30:42.732499  160181 start.go:475] detecting cgroup driver to use...
	I1212 23:30:42.732555  160181 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:30:42.748786  160181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:30:42.763633  160181 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:30:42.763695  160181 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:30:42.776829  160181 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:30:42.792023  160181 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:30:42.908551  160181 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1212 23:30:42.908925  160181 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:30:43.031194  160181 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1212 23:30:43.031242  160181 docker.go:219] disabling docker service ...
	I1212 23:30:43.031294  160181 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:30:43.044589  160181 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:30:43.055774  160181 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1212 23:30:43.056340  160181 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:30:43.069403  160181 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1212 23:30:43.173131  160181 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:30:43.185314  160181 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1212 23:30:43.185343  160181 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1212 23:30:43.276101  160181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:30:43.288549  160181 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:30:43.305507  160181 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 23:30:43.305690  160181 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:30:43.305759  160181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:30:43.315126  160181 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:30:43.315214  160181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:30:43.324561  160181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:30:43.333679  160181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:30:43.343685  160181 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:30:43.353669  160181 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:30:43.361885  160181 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:30:43.361932  160181 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:30:43.361989  160181 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:30:43.375316  160181 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:30:43.383798  160181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:30:43.483853  160181 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:30:43.649657  160181 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:30:43.649746  160181 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:30:43.654493  160181 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 23:30:43.654519  160181 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 23:30:43.654529  160181 command_runner.go:130] > Device: 16h/22d	Inode: 825         Links: 1
	I1212 23:30:43.654540  160181 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 23:30:43.654547  160181 command_runner.go:130] > Access: 2023-12-12 23:30:43.585189212 +0000
	I1212 23:30:43.654568  160181 command_runner.go:130] > Modify: 2023-12-12 23:30:43.585189212 +0000
	I1212 23:30:43.654573  160181 command_runner.go:130] > Change: 2023-12-12 23:30:43.585189212 +0000
	I1212 23:30:43.654577  160181 command_runner.go:130] >  Birth: -
	I1212 23:30:43.654633  160181 start.go:543] Will wait 60s for crictl version
	I1212 23:30:43.654694  160181 ssh_runner.go:195] Run: which crictl
	I1212 23:30:43.658424  160181 command_runner.go:130] > /usr/bin/crictl
	I1212 23:30:43.658739  160181 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:30:43.694325  160181 command_runner.go:130] > Version:  0.1.0
	I1212 23:30:43.694347  160181 command_runner.go:130] > RuntimeName:  cri-o
	I1212 23:30:43.694352  160181 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1212 23:30:43.694362  160181 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 23:30:43.694388  160181 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:30:43.694478  160181 ssh_runner.go:195] Run: crio --version
	I1212 23:30:43.739332  160181 command_runner.go:130] > crio version 1.24.1
	I1212 23:30:43.739356  160181 command_runner.go:130] > Version:          1.24.1
	I1212 23:30:43.739366  160181 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 23:30:43.739373  160181 command_runner.go:130] > GitTreeState:     dirty
	I1212 23:30:43.739382  160181 command_runner.go:130] > BuildDate:        2023-12-08T06:18:18Z
	I1212 23:30:43.739410  160181 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 23:30:43.739416  160181 command_runner.go:130] > Compiler:         gc
	I1212 23:30:43.739424  160181 command_runner.go:130] > Platform:         linux/amd64
	I1212 23:30:43.739432  160181 command_runner.go:130] > Linkmode:         dynamic
	I1212 23:30:43.739444  160181 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 23:30:43.739452  160181 command_runner.go:130] > SeccompEnabled:   true
	I1212 23:30:43.739459  160181 command_runner.go:130] > AppArmorEnabled:  false
	I1212 23:30:43.740731  160181 ssh_runner.go:195] Run: crio --version
	I1212 23:30:43.791742  160181 command_runner.go:130] > crio version 1.24.1
	I1212 23:30:43.791769  160181 command_runner.go:130] > Version:          1.24.1
	I1212 23:30:43.791778  160181 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 23:30:43.791784  160181 command_runner.go:130] > GitTreeState:     dirty
	I1212 23:30:43.791791  160181 command_runner.go:130] > BuildDate:        2023-12-08T06:18:18Z
	I1212 23:30:43.791798  160181 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 23:30:43.791805  160181 command_runner.go:130] > Compiler:         gc
	I1212 23:30:43.791813  160181 command_runner.go:130] > Platform:         linux/amd64
	I1212 23:30:43.791822  160181 command_runner.go:130] > Linkmode:         dynamic
	I1212 23:30:43.791836  160181 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 23:30:43.791847  160181 command_runner.go:130] > SeccompEnabled:   true
	I1212 23:30:43.791856  160181 command_runner.go:130] > AppArmorEnabled:  false
	I1212 23:30:43.796794  160181 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 23:30:43.798156  160181 main.go:141] libmachine: (multinode-510563) Calling .GetIP
	I1212 23:30:43.801040  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:43.801393  160181 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:30:43.801417  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:30:43.801619  160181 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 23:30:43.806260  160181 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:30:43.819233  160181 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:30:43.819285  160181 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:30:43.858449  160181 command_runner.go:130] > {
	I1212 23:30:43.858471  160181 command_runner.go:130] >   "images": [
	I1212 23:30:43.858477  160181 command_runner.go:130] >     {
	I1212 23:30:43.858489  160181 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1212 23:30:43.858496  160181 command_runner.go:130] >       "repoTags": [
	I1212 23:30:43.858504  160181 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1212 23:30:43.858509  160181 command_runner.go:130] >       ],
	I1212 23:30:43.858516  160181 command_runner.go:130] >       "repoDigests": [
	I1212 23:30:43.858531  160181 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1212 23:30:43.858554  160181 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1212 23:30:43.858571  160181 command_runner.go:130] >       ],
	I1212 23:30:43.858581  160181 command_runner.go:130] >       "size": "750414",
	I1212 23:30:43.858590  160181 command_runner.go:130] >       "uid": {
	I1212 23:30:43.858600  160181 command_runner.go:130] >         "value": "65535"
	I1212 23:30:43.858609  160181 command_runner.go:130] >       },
	I1212 23:30:43.858619  160181 command_runner.go:130] >       "username": "",
	I1212 23:30:43.858644  160181 command_runner.go:130] >       "spec": null,
	I1212 23:30:43.858659  160181 command_runner.go:130] >       "pinned": false
	I1212 23:30:43.858669  160181 command_runner.go:130] >     }
	I1212 23:30:43.858677  160181 command_runner.go:130] >   ]
	I1212 23:30:43.858683  160181 command_runner.go:130] > }
	I1212 23:30:43.858825  160181 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 23:30:43.858887  160181 ssh_runner.go:195] Run: which lz4
	I1212 23:30:43.862684  160181 command_runner.go:130] > /usr/bin/lz4
	I1212 23:30:43.862944  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1212 23:30:43.863053  160181 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 23:30:43.867084  160181 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:30:43.867267  160181 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:30:43.867298  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 23:30:45.718841  160181 crio.go:444] Took 1.855829 seconds to copy over tarball
	I1212 23:30:45.718962  160181 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:30:48.611167  160181 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.892169383s)
	I1212 23:30:48.611211  160181 crio.go:451] Took 2.892334 seconds to extract the tarball
	I1212 23:30:48.611224  160181 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 23:30:48.653316  160181 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:30:48.701752  160181 command_runner.go:130] > {
	I1212 23:30:48.701778  160181 command_runner.go:130] >   "images": [
	I1212 23:30:48.701785  160181 command_runner.go:130] >     {
	I1212 23:30:48.701806  160181 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1212 23:30:48.701814  160181 command_runner.go:130] >       "repoTags": [
	I1212 23:30:48.701824  160181 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1212 23:30:48.701831  160181 command_runner.go:130] >       ],
	I1212 23:30:48.701838  160181 command_runner.go:130] >       "repoDigests": [
	I1212 23:30:48.701857  160181 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1212 23:30:48.701872  160181 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1212 23:30:48.701883  160181 command_runner.go:130] >       ],
	I1212 23:30:48.701890  160181 command_runner.go:130] >       "size": "65258016",
	I1212 23:30:48.701897  160181 command_runner.go:130] >       "uid": null,
	I1212 23:30:48.701904  160181 command_runner.go:130] >       "username": "",
	I1212 23:30:48.701918  160181 command_runner.go:130] >       "spec": null,
	I1212 23:30:48.701927  160181 command_runner.go:130] >       "pinned": false
	I1212 23:30:48.701936  160181 command_runner.go:130] >     },
	I1212 23:30:48.701945  160181 command_runner.go:130] >     {
	I1212 23:30:48.701955  160181 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1212 23:30:48.701964  160181 command_runner.go:130] >       "repoTags": [
	I1212 23:30:48.701972  160181 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 23:30:48.701981  160181 command_runner.go:130] >       ],
	I1212 23:30:48.701991  160181 command_runner.go:130] >       "repoDigests": [
	I1212 23:30:48.702007  160181 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1212 23:30:48.702023  160181 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1212 23:30:48.702032  160181 command_runner.go:130] >       ],
	I1212 23:30:48.702046  160181 command_runner.go:130] >       "size": "31470524",
	I1212 23:30:48.702052  160181 command_runner.go:130] >       "uid": null,
	I1212 23:30:48.702057  160181 command_runner.go:130] >       "username": "",
	I1212 23:30:48.702064  160181 command_runner.go:130] >       "spec": null,
	I1212 23:30:48.702068  160181 command_runner.go:130] >       "pinned": false
	I1212 23:30:48.702075  160181 command_runner.go:130] >     },
	I1212 23:30:48.702079  160181 command_runner.go:130] >     {
	I1212 23:30:48.702087  160181 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1212 23:30:48.702093  160181 command_runner.go:130] >       "repoTags": [
	I1212 23:30:48.702099  160181 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1212 23:30:48.702105  160181 command_runner.go:130] >       ],
	I1212 23:30:48.702109  160181 command_runner.go:130] >       "repoDigests": [
	I1212 23:30:48.702118  160181 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1212 23:30:48.702131  160181 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1212 23:30:48.702140  160181 command_runner.go:130] >       ],
	I1212 23:30:48.702150  160181 command_runner.go:130] >       "size": "53621675",
	I1212 23:30:48.702160  160181 command_runner.go:130] >       "uid": null,
	I1212 23:30:48.702170  160181 command_runner.go:130] >       "username": "",
	I1212 23:30:48.702183  160181 command_runner.go:130] >       "spec": null,
	I1212 23:30:48.702193  160181 command_runner.go:130] >       "pinned": false
	I1212 23:30:48.702202  160181 command_runner.go:130] >     },
	I1212 23:30:48.702210  160181 command_runner.go:130] >     {
	I1212 23:30:48.702221  160181 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1212 23:30:48.702230  160181 command_runner.go:130] >       "repoTags": [
	I1212 23:30:48.702241  160181 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1212 23:30:48.702250  160181 command_runner.go:130] >       ],
	I1212 23:30:48.702260  160181 command_runner.go:130] >       "repoDigests": [
	I1212 23:30:48.702275  160181 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1212 23:30:48.702289  160181 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1212 23:30:48.702306  160181 command_runner.go:130] >       ],
	I1212 23:30:48.702313  160181 command_runner.go:130] >       "size": "295456551",
	I1212 23:30:48.702317  160181 command_runner.go:130] >       "uid": {
	I1212 23:30:48.702322  160181 command_runner.go:130] >         "value": "0"
	I1212 23:30:48.702326  160181 command_runner.go:130] >       },
	I1212 23:30:48.702332  160181 command_runner.go:130] >       "username": "",
	I1212 23:30:48.702336  160181 command_runner.go:130] >       "spec": null,
	I1212 23:30:48.702346  160181 command_runner.go:130] >       "pinned": false
	I1212 23:30:48.702352  160181 command_runner.go:130] >     },
	I1212 23:30:48.702370  160181 command_runner.go:130] >     {
	I1212 23:30:48.702385  160181 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1212 23:30:48.702390  160181 command_runner.go:130] >       "repoTags": [
	I1212 23:30:48.702395  160181 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1212 23:30:48.702401  160181 command_runner.go:130] >       ],
	I1212 23:30:48.702405  160181 command_runner.go:130] >       "repoDigests": [
	I1212 23:30:48.702415  160181 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1212 23:30:48.702424  160181 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1212 23:30:48.702429  160181 command_runner.go:130] >       ],
	I1212 23:30:48.702434  160181 command_runner.go:130] >       "size": "127226832",
	I1212 23:30:48.702440  160181 command_runner.go:130] >       "uid": {
	I1212 23:30:48.702445  160181 command_runner.go:130] >         "value": "0"
	I1212 23:30:48.702450  160181 command_runner.go:130] >       },
	I1212 23:30:48.702455  160181 command_runner.go:130] >       "username": "",
	I1212 23:30:48.702461  160181 command_runner.go:130] >       "spec": null,
	I1212 23:30:48.702465  160181 command_runner.go:130] >       "pinned": false
	I1212 23:30:48.702473  160181 command_runner.go:130] >     },
	I1212 23:30:48.702477  160181 command_runner.go:130] >     {
	I1212 23:30:48.702484  160181 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1212 23:30:48.702491  160181 command_runner.go:130] >       "repoTags": [
	I1212 23:30:48.702496  160181 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1212 23:30:48.702500  160181 command_runner.go:130] >       ],
	I1212 23:30:48.702507  160181 command_runner.go:130] >       "repoDigests": [
	I1212 23:30:48.702514  160181 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1212 23:30:48.702524  160181 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1212 23:30:48.702530  160181 command_runner.go:130] >       ],
	I1212 23:30:48.702534  160181 command_runner.go:130] >       "size": "123261750",
	I1212 23:30:48.702540  160181 command_runner.go:130] >       "uid": {
	I1212 23:30:48.702545  160181 command_runner.go:130] >         "value": "0"
	I1212 23:30:48.702550  160181 command_runner.go:130] >       },
	I1212 23:30:48.702554  160181 command_runner.go:130] >       "username": "",
	I1212 23:30:48.702558  160181 command_runner.go:130] >       "spec": null,
	I1212 23:30:48.702565  160181 command_runner.go:130] >       "pinned": false
	I1212 23:30:48.702568  160181 command_runner.go:130] >     },
	I1212 23:30:48.702581  160181 command_runner.go:130] >     {
	I1212 23:30:48.702590  160181 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1212 23:30:48.702596  160181 command_runner.go:130] >       "repoTags": [
	I1212 23:30:48.702601  160181 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1212 23:30:48.702607  160181 command_runner.go:130] >       ],
	I1212 23:30:48.702611  160181 command_runner.go:130] >       "repoDigests": [
	I1212 23:30:48.702621  160181 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1212 23:30:48.702630  160181 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1212 23:30:48.702636  160181 command_runner.go:130] >       ],
	I1212 23:30:48.702640  160181 command_runner.go:130] >       "size": "74749335",
	I1212 23:30:48.702644  160181 command_runner.go:130] >       "uid": null,
	I1212 23:30:48.702650  160181 command_runner.go:130] >       "username": "",
	I1212 23:30:48.702654  160181 command_runner.go:130] >       "spec": null,
	I1212 23:30:48.702661  160181 command_runner.go:130] >       "pinned": false
	I1212 23:30:48.702664  160181 command_runner.go:130] >     },
	I1212 23:30:48.702670  160181 command_runner.go:130] >     {
	I1212 23:30:48.702676  160181 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1212 23:30:48.702683  160181 command_runner.go:130] >       "repoTags": [
	I1212 23:30:48.702690  160181 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1212 23:30:48.702699  160181 command_runner.go:130] >       ],
	I1212 23:30:48.702705  160181 command_runner.go:130] >       "repoDigests": [
	I1212 23:30:48.702759  160181 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1212 23:30:48.702775  160181 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1212 23:30:48.702781  160181 command_runner.go:130] >       ],
	I1212 23:30:48.702788  160181 command_runner.go:130] >       "size": "61551410",
	I1212 23:30:48.702797  160181 command_runner.go:130] >       "uid": {
	I1212 23:30:48.702806  160181 command_runner.go:130] >         "value": "0"
	I1212 23:30:48.702815  160181 command_runner.go:130] >       },
	I1212 23:30:48.702822  160181 command_runner.go:130] >       "username": "",
	I1212 23:30:48.702832  160181 command_runner.go:130] >       "spec": null,
	I1212 23:30:48.702842  160181 command_runner.go:130] >       "pinned": false
	I1212 23:30:48.702850  160181 command_runner.go:130] >     },
	I1212 23:30:48.702859  160181 command_runner.go:130] >     {
	I1212 23:30:48.702872  160181 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1212 23:30:48.702881  160181 command_runner.go:130] >       "repoTags": [
	I1212 23:30:48.702891  160181 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1212 23:30:48.702905  160181 command_runner.go:130] >       ],
	I1212 23:30:48.702915  160181 command_runner.go:130] >       "repoDigests": [
	I1212 23:30:48.702923  160181 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1212 23:30:48.702932  160181 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1212 23:30:48.702936  160181 command_runner.go:130] >       ],
	I1212 23:30:48.702940  160181 command_runner.go:130] >       "size": "750414",
	I1212 23:30:48.702947  160181 command_runner.go:130] >       "uid": {
	I1212 23:30:48.702951  160181 command_runner.go:130] >         "value": "65535"
	I1212 23:30:48.702955  160181 command_runner.go:130] >       },
	I1212 23:30:48.702959  160181 command_runner.go:130] >       "username": "",
	I1212 23:30:48.702963  160181 command_runner.go:130] >       "spec": null,
	I1212 23:30:48.702969  160181 command_runner.go:130] >       "pinned": false
	I1212 23:30:48.702973  160181 command_runner.go:130] >     }
	I1212 23:30:48.702979  160181 command_runner.go:130] >   ]
	I1212 23:30:48.702982  160181 command_runner.go:130] > }
	I1212 23:30:48.703316  160181 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 23:30:48.703332  160181 cache_images.go:84] Images are preloaded, skipping loading
	I1212 23:30:48.703412  160181 ssh_runner.go:195] Run: crio config
	I1212 23:30:48.753840  160181 command_runner.go:130] ! time="2023-12-12 23:30:48.703389861Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1212 23:30:48.753876  160181 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 23:30:48.760588  160181 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 23:30:48.760628  160181 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 23:30:48.760639  160181 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 23:30:48.760644  160181 command_runner.go:130] > #
	I1212 23:30:48.760661  160181 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 23:30:48.760671  160181 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 23:30:48.760682  160181 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 23:30:48.760695  160181 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 23:30:48.760706  160181 command_runner.go:130] > # reload'.
	I1212 23:30:48.760717  160181 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 23:30:48.760732  160181 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 23:30:48.760745  160181 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 23:30:48.760755  160181 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 23:30:48.760761  160181 command_runner.go:130] > [crio]
	I1212 23:30:48.760772  160181 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 23:30:48.760780  160181 command_runner.go:130] > # containers images, in this directory.
	I1212 23:30:48.760785  160181 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1212 23:30:48.760798  160181 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 23:30:48.760807  160181 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1212 23:30:48.760819  160181 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 23:30:48.760833  160181 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 23:30:48.760844  160181 command_runner.go:130] > storage_driver = "overlay"
	I1212 23:30:48.760857  160181 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 23:30:48.760867  160181 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 23:30:48.760874  160181 command_runner.go:130] > storage_option = [
	I1212 23:30:48.760885  160181 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1212 23:30:48.760889  160181 command_runner.go:130] > ]
	I1212 23:30:48.760895  160181 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 23:30:48.760904  160181 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 23:30:48.760908  160181 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 23:30:48.760916  160181 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 23:30:48.760922  160181 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 23:30:48.760931  160181 command_runner.go:130] > # always happen on a node reboot
	I1212 23:30:48.760938  160181 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 23:30:48.760944  160181 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 23:30:48.760952  160181 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 23:30:48.760963  160181 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 23:30:48.760970  160181 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1212 23:30:48.760978  160181 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 23:30:48.760988  160181 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 23:30:48.760994  160181 command_runner.go:130] > # internal_wipe = true
	I1212 23:30:48.761007  160181 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 23:30:48.761021  160181 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 23:30:48.761033  160181 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 23:30:48.761045  160181 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 23:30:48.761058  160181 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 23:30:48.761068  160181 command_runner.go:130] > [crio.api]
	I1212 23:30:48.761077  160181 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 23:30:48.761082  160181 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 23:30:48.761089  160181 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 23:30:48.761097  160181 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 23:30:48.761106  160181 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 23:30:48.761112  160181 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 23:30:48.761118  160181 command_runner.go:130] > # stream_port = "0"
	I1212 23:30:48.761124  160181 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 23:30:48.761130  160181 command_runner.go:130] > # stream_enable_tls = false
	I1212 23:30:48.761137  160181 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 23:30:48.761143  160181 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 23:30:48.761150  160181 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 23:30:48.761158  160181 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1212 23:30:48.761164  160181 command_runner.go:130] > # minutes.
	I1212 23:30:48.761168  160181 command_runner.go:130] > # stream_tls_cert = ""
	I1212 23:30:48.761175  160181 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 23:30:48.761183  160181 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1212 23:30:48.761188  160181 command_runner.go:130] > # stream_tls_key = ""
	I1212 23:30:48.761196  160181 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 23:30:48.761202  160181 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 23:30:48.761210  160181 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1212 23:30:48.761217  160181 command_runner.go:130] > # stream_tls_ca = ""
	I1212 23:30:48.761228  160181 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 23:30:48.761235  160181 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1212 23:30:48.761242  160181 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 23:30:48.761248  160181 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1212 23:30:48.761269  160181 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 23:30:48.761278  160181 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 23:30:48.761281  160181 command_runner.go:130] > [crio.runtime]
	I1212 23:30:48.761287  160181 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 23:30:48.761294  160181 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 23:30:48.761299  160181 command_runner.go:130] > # "nofile=1024:2048"
	I1212 23:30:48.761305  160181 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 23:30:48.761311  160181 command_runner.go:130] > # default_ulimits = [
	I1212 23:30:48.761315  160181 command_runner.go:130] > # ]
	I1212 23:30:48.761323  160181 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 23:30:48.761328  160181 command_runner.go:130] > # no_pivot = false
	I1212 23:30:48.761336  160181 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 23:30:48.761342  160181 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 23:30:48.761351  160181 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 23:30:48.761357  160181 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 23:30:48.761364  160181 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 23:30:48.761374  160181 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 23:30:48.761381  160181 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1212 23:30:48.761385  160181 command_runner.go:130] > # Cgroup setting for conmon
	I1212 23:30:48.761393  160181 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 23:30:48.761400  160181 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 23:30:48.761406  160181 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 23:30:48.761414  160181 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 23:30:48.761420  160181 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 23:30:48.761426  160181 command_runner.go:130] > conmon_env = [
	I1212 23:30:48.761432  160181 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1212 23:30:48.761437  160181 command_runner.go:130] > ]
	I1212 23:30:48.761442  160181 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 23:30:48.761449  160181 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 23:30:48.761455  160181 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 23:30:48.761459  160181 command_runner.go:130] > # default_env = [
	I1212 23:30:48.761468  160181 command_runner.go:130] > # ]
	I1212 23:30:48.761474  160181 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 23:30:48.761480  160181 command_runner.go:130] > # selinux = false
	I1212 23:30:48.761486  160181 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 23:30:48.761502  160181 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1212 23:30:48.761510  160181 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1212 23:30:48.761514  160181 command_runner.go:130] > # seccomp_profile = ""
	I1212 23:30:48.761521  160181 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1212 23:30:48.761530  160181 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1212 23:30:48.761538  160181 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1212 23:30:48.761543  160181 command_runner.go:130] > # which might increase security.
	I1212 23:30:48.761547  160181 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1212 23:30:48.761555  160181 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 23:30:48.761562  160181 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 23:30:48.761570  160181 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 23:30:48.761576  160181 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 23:30:48.761581  160181 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:30:48.761588  160181 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 23:30:48.761595  160181 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 23:30:48.761602  160181 command_runner.go:130] > # the cgroup blockio controller.
	I1212 23:30:48.761606  160181 command_runner.go:130] > # blockio_config_file = ""
	I1212 23:30:48.761612  160181 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 23:30:48.761617  160181 command_runner.go:130] > # irqbalance daemon.
	I1212 23:30:48.761622  160181 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 23:30:48.761630  160181 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 23:30:48.761635  160181 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:30:48.761642  160181 command_runner.go:130] > # rdt_config_file = ""
	I1212 23:30:48.761647  160181 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 23:30:48.761653  160181 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 23:30:48.761659  160181 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 23:30:48.761665  160181 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 23:30:48.761671  160181 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 23:30:48.761679  160181 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 23:30:48.761683  160181 command_runner.go:130] > # will be added.
	I1212 23:30:48.761690  160181 command_runner.go:130] > # default_capabilities = [
	I1212 23:30:48.761693  160181 command_runner.go:130] > # 	"CHOWN",
	I1212 23:30:48.761702  160181 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 23:30:48.761705  160181 command_runner.go:130] > # 	"FSETID",
	I1212 23:30:48.761709  160181 command_runner.go:130] > # 	"FOWNER",
	I1212 23:30:48.761713  160181 command_runner.go:130] > # 	"SETGID",
	I1212 23:30:48.761717  160181 command_runner.go:130] > # 	"SETUID",
	I1212 23:30:48.761720  160181 command_runner.go:130] > # 	"SETPCAP",
	I1212 23:30:48.761725  160181 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 23:30:48.761728  160181 command_runner.go:130] > # 	"KILL",
	I1212 23:30:48.761734  160181 command_runner.go:130] > # ]
	I1212 23:30:48.761740  160181 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 23:30:48.761748  160181 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 23:30:48.761752  160181 command_runner.go:130] > # default_sysctls = [
	I1212 23:30:48.761759  160181 command_runner.go:130] > # ]
	I1212 23:30:48.761763  160181 command_runner.go:130] > # List of devices on the host that a
	I1212 23:30:48.761769  160181 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 23:30:48.761776  160181 command_runner.go:130] > # allowed_devices = [
	I1212 23:30:48.761779  160181 command_runner.go:130] > # 	"/dev/fuse",
	I1212 23:30:48.761784  160181 command_runner.go:130] > # ]
	I1212 23:30:48.761791  160181 command_runner.go:130] > # List of additional devices, specified as
	I1212 23:30:48.761798  160181 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 23:30:48.761806  160181 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 23:30:48.761832  160181 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 23:30:48.761838  160181 command_runner.go:130] > # additional_devices = [
	I1212 23:30:48.761841  160181 command_runner.go:130] > # ]
	I1212 23:30:48.761847  160181 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 23:30:48.761851  160181 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 23:30:48.761855  160181 command_runner.go:130] > # 	"/etc/cdi",
	I1212 23:30:48.761861  160181 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 23:30:48.761865  160181 command_runner.go:130] > # ]
	I1212 23:30:48.761871  160181 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 23:30:48.761878  160181 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 23:30:48.761882  160181 command_runner.go:130] > # Defaults to false.
	I1212 23:30:48.761888  160181 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 23:30:48.761896  160181 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 23:30:48.761902  160181 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 23:30:48.761908  160181 command_runner.go:130] > # hooks_dir = [
	I1212 23:30:48.761915  160181 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 23:30:48.761921  160181 command_runner.go:130] > # ]
	I1212 23:30:48.761927  160181 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 23:30:48.761935  160181 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 23:30:48.761940  160181 command_runner.go:130] > # its default mounts from the following two files:
	I1212 23:30:48.761945  160181 command_runner.go:130] > #
	I1212 23:30:48.761952  160181 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 23:30:48.761960  160181 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 23:30:48.761966  160181 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 23:30:48.761970  160181 command_runner.go:130] > #
	I1212 23:30:48.761976  160181 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 23:30:48.761984  160181 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 23:30:48.761990  160181 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 23:30:48.761998  160181 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 23:30:48.762001  160181 command_runner.go:130] > #
	I1212 23:30:48.762008  160181 command_runner.go:130] > # default_mounts_file = ""
	I1212 23:30:48.762013  160181 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 23:30:48.762021  160181 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 23:30:48.762028  160181 command_runner.go:130] > pids_limit = 1024
	I1212 23:30:48.762037  160181 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1212 23:30:48.762046  160181 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 23:30:48.762052  160181 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 23:30:48.762059  160181 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 23:30:48.762066  160181 command_runner.go:130] > # log_size_max = -1
	I1212 23:30:48.762072  160181 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 23:30:48.762078  160181 command_runner.go:130] > # log_to_journald = false
	I1212 23:30:48.762084  160181 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 23:30:48.762091  160181 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 23:30:48.762097  160181 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 23:30:48.762104  160181 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 23:30:48.762109  160181 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 23:30:48.762115  160181 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 23:30:48.762121  160181 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 23:30:48.762127  160181 command_runner.go:130] > # read_only = false
	I1212 23:30:48.762133  160181 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 23:30:48.762141  160181 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 23:30:48.762152  160181 command_runner.go:130] > # live configuration reload.
	I1212 23:30:48.762158  160181 command_runner.go:130] > # log_level = "info"
	I1212 23:30:48.762164  160181 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 23:30:48.762171  160181 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:30:48.762175  160181 command_runner.go:130] > # log_filter = ""
	I1212 23:30:48.762182  160181 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 23:30:48.762188  160181 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 23:30:48.762194  160181 command_runner.go:130] > # separated by comma.
	I1212 23:30:48.762198  160181 command_runner.go:130] > # uid_mappings = ""
	I1212 23:30:48.762204  160181 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 23:30:48.762212  160181 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 23:30:48.762216  160181 command_runner.go:130] > # separated by comma.
	I1212 23:30:48.762220  160181 command_runner.go:130] > # gid_mappings = ""
	I1212 23:30:48.762228  160181 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 23:30:48.762234  160181 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 23:30:48.762240  160181 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 23:30:48.762245  160181 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 23:30:48.762252  160181 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 23:30:48.762262  160181 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 23:30:48.762271  160181 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 23:30:48.762276  160181 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 23:30:48.762282  160181 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 23:30:48.762290  160181 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 23:30:48.762297  160181 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 23:30:48.762301  160181 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 23:30:48.762309  160181 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 23:30:48.762315  160181 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 23:30:48.762325  160181 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 23:30:48.762330  160181 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 23:30:48.762339  160181 command_runner.go:130] > drop_infra_ctr = false
	I1212 23:30:48.762345  160181 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 23:30:48.762353  160181 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 23:30:48.762360  160181 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 23:30:48.762366  160181 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 23:30:48.762372  160181 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 23:30:48.762379  160181 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 23:30:48.762386  160181 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 23:30:48.762395  160181 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 23:30:48.762400  160181 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1212 23:30:48.762408  160181 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 23:30:48.762414  160181 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1212 23:30:48.762422  160181 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1212 23:30:48.762427  160181 command_runner.go:130] > # default_runtime = "runc"
	I1212 23:30:48.762434  160181 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 23:30:48.762441  160181 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1212 23:30:48.762452  160181 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 23:30:48.762458  160181 command_runner.go:130] > # creation as a file is not desired either.
	I1212 23:30:48.762466  160181 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 23:30:48.762473  160181 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 23:30:48.762477  160181 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 23:30:48.762483  160181 command_runner.go:130] > # ]
	I1212 23:30:48.762489  160181 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 23:30:48.762501  160181 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 23:30:48.762509  160181 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1212 23:30:48.762517  160181 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1212 23:30:48.762523  160181 command_runner.go:130] > #
	I1212 23:30:48.762527  160181 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1212 23:30:48.762534  160181 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1212 23:30:48.762539  160181 command_runner.go:130] > #  runtime_type = "oci"
	I1212 23:30:48.762544  160181 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1212 23:30:48.762549  160181 command_runner.go:130] > #  privileged_without_host_devices = false
	I1212 23:30:48.762553  160181 command_runner.go:130] > #  allowed_annotations = []
	I1212 23:30:48.762559  160181 command_runner.go:130] > # Where:
	I1212 23:30:48.762565  160181 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1212 23:30:48.762573  160181 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1212 23:30:48.762579  160181 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 23:30:48.762588  160181 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 23:30:48.762592  160181 command_runner.go:130] > #   in $PATH.
	I1212 23:30:48.762601  160181 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1212 23:30:48.762606  160181 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 23:30:48.762614  160181 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1212 23:30:48.762617  160181 command_runner.go:130] > #   state.
	I1212 23:30:48.762628  160181 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 23:30:48.762637  160181 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1212 23:30:48.762643  160181 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 23:30:48.762651  160181 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 23:30:48.762658  160181 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 23:30:48.762668  160181 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 23:30:48.762673  160181 command_runner.go:130] > #   The currently recognized values are:
	I1212 23:30:48.762682  160181 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 23:30:48.762690  160181 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 23:30:48.762698  160181 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 23:30:48.762704  160181 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 23:30:48.762713  160181 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 23:30:48.762720  160181 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 23:30:48.762728  160181 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 23:30:48.762735  160181 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1212 23:30:48.762742  160181 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 23:30:48.762746  160181 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 23:30:48.762753  160181 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1212 23:30:48.762758  160181 command_runner.go:130] > runtime_type = "oci"
	I1212 23:30:48.762765  160181 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 23:30:48.762769  160181 command_runner.go:130] > runtime_config_path = ""
	I1212 23:30:48.762774  160181 command_runner.go:130] > monitor_path = ""
	I1212 23:30:48.762780  160181 command_runner.go:130] > monitor_cgroup = ""
	I1212 23:30:48.762784  160181 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 23:30:48.762790  160181 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1212 23:30:48.762795  160181 command_runner.go:130] > # running containers
	I1212 23:30:48.762799  160181 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1212 23:30:48.762807  160181 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1212 23:30:48.762870  160181 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1212 23:30:48.762883  160181 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1212 23:30:48.762888  160181 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1212 23:30:48.762892  160181 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1212 23:30:48.762897  160181 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1212 23:30:48.762901  160181 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1212 23:30:48.762908  160181 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1212 23:30:48.762912  160181 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1212 23:30:48.762921  160181 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 23:30:48.762930  160181 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 23:30:48.762936  160181 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 23:30:48.762943  160181 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1212 23:30:48.762953  160181 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1212 23:30:48.762961  160181 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 23:30:48.762970  160181 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 23:30:48.762980  160181 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 23:30:48.762986  160181 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 23:30:48.762995  160181 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 23:30:48.762999  160181 command_runner.go:130] > # Example:
	I1212 23:30:48.763006  160181 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 23:30:48.763011  160181 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 23:30:48.763018  160181 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 23:30:48.763023  160181 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 23:30:48.763029  160181 command_runner.go:130] > # cpuset = 0
	I1212 23:30:48.763033  160181 command_runner.go:130] > # cpushares = "0-1"
	I1212 23:30:48.763036  160181 command_runner.go:130] > # Where:
	I1212 23:30:48.763045  160181 command_runner.go:130] > # The workload name is workload-type.
	I1212 23:30:48.763055  160181 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 23:30:48.763062  160181 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 23:30:48.763070  160181 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 23:30:48.763078  160181 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 23:30:48.763086  160181 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 23:30:48.763089  160181 command_runner.go:130] > # 
	I1212 23:30:48.763097  160181 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 23:30:48.763103  160181 command_runner.go:130] > #
	I1212 23:30:48.763109  160181 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 23:30:48.763117  160181 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1212 23:30:48.763123  160181 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1212 23:30:48.763131  160181 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1212 23:30:48.763137  160181 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1212 23:30:48.763144  160181 command_runner.go:130] > [crio.image]
	I1212 23:30:48.763150  160181 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 23:30:48.763156  160181 command_runner.go:130] > # default_transport = "docker://"
	I1212 23:30:48.763162  160181 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 23:30:48.763174  160181 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 23:30:48.763178  160181 command_runner.go:130] > # global_auth_file = ""
	I1212 23:30:48.763186  160181 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 23:30:48.763191  160181 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:30:48.763198  160181 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1212 23:30:48.763205  160181 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 23:30:48.763212  160181 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 23:30:48.763217  160181 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:30:48.763224  160181 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 23:30:48.763229  160181 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 23:30:48.763240  160181 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1212 23:30:48.763247  160181 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1212 23:30:48.763254  160181 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 23:30:48.763261  160181 command_runner.go:130] > # pause_command = "/pause"
	I1212 23:30:48.763267  160181 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 23:30:48.763275  160181 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 23:30:48.763281  160181 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 23:30:48.763287  160181 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 23:30:48.763294  160181 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 23:30:48.763297  160181 command_runner.go:130] > # signature_policy = ""
	I1212 23:30:48.763303  160181 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 23:30:48.763309  160181 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 23:30:48.763313  160181 command_runner.go:130] > # changing them here.
	I1212 23:30:48.763316  160181 command_runner.go:130] > # insecure_registries = [
	I1212 23:30:48.763320  160181 command_runner.go:130] > # ]
	I1212 23:30:48.763329  160181 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 23:30:48.763334  160181 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1212 23:30:48.763337  160181 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 23:30:48.763342  160181 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 23:30:48.763347  160181 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 23:30:48.763355  160181 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 23:30:48.763360  160181 command_runner.go:130] > # CNI plugins.
	I1212 23:30:48.763366  160181 command_runner.go:130] > [crio.network]
	I1212 23:30:48.763371  160181 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 23:30:48.763379  160181 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1212 23:30:48.763383  160181 command_runner.go:130] > # cni_default_network = ""
	I1212 23:30:48.763392  160181 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 23:30:48.763399  160181 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 23:30:48.763412  160181 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 23:30:48.763418  160181 command_runner.go:130] > # plugin_dirs = [
	I1212 23:30:48.763422  160181 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 23:30:48.763425  160181 command_runner.go:130] > # ]
	I1212 23:30:48.763431  160181 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 23:30:48.763437  160181 command_runner.go:130] > [crio.metrics]
	I1212 23:30:48.763442  160181 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 23:30:48.763448  160181 command_runner.go:130] > enable_metrics = true
	I1212 23:30:48.763453  160181 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 23:30:48.763458  160181 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 23:30:48.763464  160181 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1212 23:30:48.763472  160181 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 23:30:48.763478  160181 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 23:30:48.763485  160181 command_runner.go:130] > # metrics_collectors = [
	I1212 23:30:48.763489  160181 command_runner.go:130] > # 	"operations",
	I1212 23:30:48.763500  160181 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1212 23:30:48.763507  160181 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1212 23:30:48.763513  160181 command_runner.go:130] > # 	"operations_errors",
	I1212 23:30:48.763517  160181 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1212 23:30:48.763524  160181 command_runner.go:130] > # 	"image_pulls_by_name",
	I1212 23:30:48.763528  160181 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1212 23:30:48.763534  160181 command_runner.go:130] > # 	"image_pulls_failures",
	I1212 23:30:48.763539  160181 command_runner.go:130] > # 	"image_pulls_successes",
	I1212 23:30:48.763548  160181 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 23:30:48.763554  160181 command_runner.go:130] > # 	"image_layer_reuse",
	I1212 23:30:48.763561  160181 command_runner.go:130] > # 	"containers_oom_total",
	I1212 23:30:48.763565  160181 command_runner.go:130] > # 	"containers_oom",
	I1212 23:30:48.763571  160181 command_runner.go:130] > # 	"processes_defunct",
	I1212 23:30:48.763575  160181 command_runner.go:130] > # 	"operations_total",
	I1212 23:30:48.763580  160181 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 23:30:48.763585  160181 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 23:30:48.763589  160181 command_runner.go:130] > # 	"operations_errors_total",
	I1212 23:30:48.763596  160181 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 23:30:48.763600  160181 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 23:30:48.763609  160181 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 23:30:48.763616  160181 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 23:30:48.763620  160181 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 23:30:48.763627  160181 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 23:30:48.763630  160181 command_runner.go:130] > # ]
	I1212 23:30:48.763638  160181 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 23:30:48.763642  160181 command_runner.go:130] > # metrics_port = 9090
	I1212 23:30:48.763647  160181 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 23:30:48.763652  160181 command_runner.go:130] > # metrics_socket = ""
	I1212 23:30:48.763657  160181 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 23:30:48.763665  160181 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 23:30:48.763671  160181 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 23:30:48.763676  160181 command_runner.go:130] > # certificate on any modification event.
	I1212 23:30:48.763682  160181 command_runner.go:130] > # metrics_cert = ""
	I1212 23:30:48.763687  160181 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 23:30:48.763694  160181 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 23:30:48.763698  160181 command_runner.go:130] > # metrics_key = ""
	I1212 23:30:48.763707  160181 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 23:30:48.763713  160181 command_runner.go:130] > [crio.tracing]
	I1212 23:30:48.763720  160181 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 23:30:48.763725  160181 command_runner.go:130] > # enable_tracing = false
	I1212 23:30:48.763733  160181 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1212 23:30:48.763738  160181 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1212 23:30:48.763745  160181 command_runner.go:130] > # Number of samples to collect per million spans.
	I1212 23:30:48.763750  160181 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 23:30:48.763758  160181 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 23:30:48.763762  160181 command_runner.go:130] > [crio.stats]
	I1212 23:30:48.763772  160181 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 23:30:48.763777  160181 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 23:30:48.763782  160181 command_runner.go:130] > # stats_collection_period = 0
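In the crio.conf dump above, only a handful of keys are uncommented and therefore in effect for this run in the portion shown (cgroup_manager = "cgroupfs", pids_limit = 1024, drop_infra_ctr = false, pinns_path, the [crio.runtime.runtimes.runc] table, pause_image and enable_metrics); every line starting with # is still a default. The following is a minimal sketch, not minikube code, that lists those effective overrides from a crio.conf-style file; the file path and program are purely illustrative.

// effective_crio_settings.go - illustrative sketch: print the uncommented
// key = value settings (the effective overrides) from a crio.conf-style file,
// skipping the commented-out defaults shown in the dump above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	path := "/etc/crio/crio.conf" // assumed location; pass another path as the first argument
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	f, err := os.Open(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	section := ""
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "" || strings.HasPrefix(line, "#"):
			// blank lines and commented defaults: not in effect
		case strings.HasPrefix(line, "["):
			section = line // e.g. [crio.runtime] or [crio.image]
		case strings.Contains(line, "="):
			fmt.Printf("%-35s %s\n", section, line) // an effective override
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}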
	I1212 23:30:48.763856  160181 cni.go:84] Creating CNI manager for ""
	I1212 23:30:48.763867  160181 cni.go:136] 3 nodes found, recommending kindnet
	I1212 23:30:48.763884  160181 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:30:48.763903  160181 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-510563 NodeName:multinode-510563 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:30:48.764034  160181 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-510563"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
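The block above is the rendered kubeadm config: four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A minimal sketch, assuming the rendered file is available locally (minikube copies it to /var/tmp/minikube/kubeadm.yaml.new in the transfer step below), that simply enumerates the documents and their kinds:

// kubeadm_yaml_kinds.go - illustrative sketch: split a multi-document kubeadm
// config like the one above and report each document's apiVersion and kind.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	path := "/var/tmp/minikube/kubeadm.yaml.new" // assumed path, taken from the transfer step below
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		apiVersion, kind := "", ""
		for _, line := range strings.Split(doc, "\n") {
			line = strings.TrimSpace(line)
			if v, ok := strings.CutPrefix(line, "apiVersion:"); ok && apiVersion == "" {
				apiVersion = strings.TrimSpace(v)
			}
			if v, ok := strings.CutPrefix(line, "kind:"); ok && kind == "" {
				kind = strings.TrimSpace(v)
			}
		}
		fmt.Printf("document %d: %s / %s\n", i+1, apiVersion, kind)
	}
}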
	
	I1212 23:30:48.764112  160181 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-510563 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-510563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
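The drop-in above clears ExecStart and re-sets it with the node-specific kubelet flags. A small sketch, using a text/template approach that is not minikube's actual generator, that renders an equivalent ExecStart line from the values seen in this run:

// kubelet_flags.go - illustrative sketch: render an ExecStart line like the
// systemd drop-in above from a few node-specific parameters.
package main

import (
	"os"
	"text/template"
)

type kubeletParams struct {
	Version, Hostname, NodeIP, RuntimeEndpoint string
}

const line = `ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet ` +
	`--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf ` +
	`--config=/var/lib/kubelet/config.yaml ` +
	`--container-runtime-endpoint={{.RuntimeEndpoint}} ` +
	`--hostname-override={{.Hostname}} ` +
	`--kubeconfig=/etc/kubernetes/kubelet.conf ` +
	`--node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("execstart").Parse(line))
	// Values taken from the log above; substitute your own node's settings.
	p := kubeletParams{
		Version:         "v1.28.4",
		Hostname:        "multinode-510563",
		NodeIP:          "192.168.39.38",
		RuntimeEndpoint: "unix:///var/run/crio/crio.sock",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}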
	I1212 23:30:48.764166  160181 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:30:48.773126  160181 command_runner.go:130] > kubeadm
	I1212 23:30:48.773150  160181 command_runner.go:130] > kubectl
	I1212 23:30:48.773157  160181 command_runner.go:130] > kubelet
	I1212 23:30:48.773308  160181 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:30:48.773394  160181 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:30:48.782518  160181 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1212 23:30:48.799974  160181 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:30:48.816862  160181 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1212 23:30:48.833840  160181 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I1212 23:30:48.837856  160181 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
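The one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the node IP exactly once. A rough Go equivalent, purely illustrative and using the IP from this run:

// hosts_entry.go - illustrative sketch of the bash one-liner above: drop any
// existing control-plane.minikube.internal line from /etc/hosts and append the
// desired mapping. Run as root, ideally against a copy of the file first.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.39.38\tcontrol-plane.minikube.internal" // values from the log above

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Skip any existing line for the control-plane alias; keep everything else.
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "control-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
	if err := os.WriteFile(hostsPath, []byte(out), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}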
	I1212 23:30:48.850884  160181 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563 for IP: 192.168.39.38
	I1212 23:30:48.850926  160181 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:30:48.851093  160181 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1212 23:30:48.851134  160181 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1212 23:30:48.851227  160181 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.key
	I1212 23:30:48.851307  160181 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/apiserver.key.383c1efe
	I1212 23:30:48.851346  160181 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/proxy-client.key
	I1212 23:30:48.851354  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 23:30:48.851366  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 23:30:48.851378  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 23:30:48.851398  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 23:30:48.851424  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 23:30:48.851437  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 23:30:48.851451  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 23:30:48.851461  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 23:30:48.851516  160181 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1212 23:30:48.851546  160181 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1212 23:30:48.851553  160181 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:30:48.851573  160181 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:30:48.851598  160181 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:30:48.851620  160181 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1212 23:30:48.851664  160181 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1212 23:30:48.851688  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem -> /usr/share/ca-certificates/143541.pem
	I1212 23:30:48.851701  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> /usr/share/ca-certificates/1435412.pem
	I1212 23:30:48.851713  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:30:48.852319  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:30:48.881800  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:30:48.905985  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:30:48.928798  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 23:30:48.951902  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:30:48.975527  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 23:30:48.999366  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:30:49.022359  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 23:30:49.046967  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1212 23:30:49.070019  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1212 23:30:49.092716  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:30:49.114935  160181 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:30:49.131436  160181 ssh_runner.go:195] Run: openssl version
	I1212 23:30:49.136750  160181 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 23:30:49.137093  160181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1212 23:30:49.147988  160181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1212 23:30:49.152892  160181 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1212 23:30:49.152927  160181 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1212 23:30:49.152966  160181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1212 23:30:49.158303  160181 command_runner.go:130] > 51391683
	I1212 23:30:49.158676  160181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1212 23:30:49.169867  160181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1212 23:30:49.181174  160181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1212 23:30:49.185933  160181 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1212 23:30:49.185993  160181 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1212 23:30:49.186051  160181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1212 23:30:49.191838  160181 command_runner.go:130] > 3ec20f2e
	I1212 23:30:49.191965  160181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:30:49.203469  160181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:30:49.214734  160181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:30:49.219940  160181 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:30:49.219984  160181 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:30:49.220035  160181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:30:49.225843  160181 command_runner.go:130] > b5213941
	I1212 23:30:49.225933  160181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
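The per-certificate pattern above installs each CA (143541.pem, 1435412.pem, minikubeCA.pem), asks openssl for its subject hash, and links /etc/ssl/certs/<hash>.0 to it so TLS clients can find the CA by hash. A sketch of that step, mirroring the logged commands but not minikube's code:

// ca_hash_link.go - illustrative sketch: compute a CA certificate's subject
// hash via openssl and create the /etc/ssl/certs/<hash>.0 symlink, as the
// "openssl x509 -hash" plus "ln -fs" commands above do.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // one of the files installed above
	if len(os.Args) > 1 {
		pem = os.Args[1]
	}
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem in this run
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Same effect as: test -L <link> || ln -fs <pem> <link>
	if _, err := os.Lstat(link); err == nil {
		fmt.Println(link, "already exists")
		return
	}
	if err := os.Symlink(pem, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", pem)
}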
	I1212 23:30:49.237879  160181 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:30:49.242613  160181 command_runner.go:130] > ca.crt
	I1212 23:30:49.242634  160181 command_runner.go:130] > ca.key
	I1212 23:30:49.242639  160181 command_runner.go:130] > healthcheck-client.crt
	I1212 23:30:49.242643  160181 command_runner.go:130] > healthcheck-client.key
	I1212 23:30:49.242648  160181 command_runner.go:130] > peer.crt
	I1212 23:30:49.242652  160181 command_runner.go:130] > peer.key
	I1212 23:30:49.242655  160181 command_runner.go:130] > server.crt
	I1212 23:30:49.242659  160181 command_runner.go:130] > server.key
	I1212 23:30:49.242709  160181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:30:49.248591  160181 command_runner.go:130] > Certificate will not expire
	I1212 23:30:49.248808  160181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:30:49.254545  160181 command_runner.go:130] > Certificate will not expire
	I1212 23:30:49.254736  160181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:30:49.260498  160181 command_runner.go:130] > Certificate will not expire
	I1212 23:30:49.260773  160181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:30:49.266915  160181 command_runner.go:130] > Certificate will not expire
	I1212 23:30:49.267011  160181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:30:49.272765  160181 command_runner.go:130] > Certificate will not expire
	I1212 23:30:49.273012  160181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 23:30:49.278941  160181 command_runner.go:130] > Certificate will not expire
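Each "openssl x509 -checkend 86400" call above asks whether the certificate expires within the next 24 hours; "Certificate will not expire" means it does not. A sketch of the same check using Go's crypto/x509, with the first certificate path from the log as an assumed default:

// cert_checkend.go - illustrative sketch of the -checkend 86400 checks above:
// report whether a PEM certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	path := "/var/lib/minikube/certs/apiserver-etcd-client.crt" // first cert checked in the log
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found in", path)
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	deadline := time.Now().Add(86400 * time.Second)
	if cert.NotAfter.Before(deadline) {
		fmt.Println("Certificate will expire") // openssl exits 1 in this case
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}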
	I1212 23:30:49.279139  160181 kubeadm.go:404] StartCluster: {Name:multinode-510563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-510563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.133 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fals
e ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:30:49.279278  160181 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:30:49.279352  160181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:30:49.321700  160181 cri.go:89] found id: ""
	I1212 23:30:49.321772  160181 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:30:49.333151  160181 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 23:30:49.333175  160181 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 23:30:49.333184  160181 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 23:30:49.333189  160181 command_runner.go:130] > member
	I1212 23:30:49.333235  160181 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:30:49.333248  160181 kubeadm.go:636] restartCluster start
	I1212 23:30:49.333306  160181 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:30:49.343263  160181 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:49.343853  160181 kubeconfig.go:92] found "multinode-510563" server: "https://192.168.39.38:8443"
	I1212 23:30:49.344341  160181 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:30:49.344604  160181 kapi.go:59] client config for multinode-510563: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.crt", KeyFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.key", CAFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:30:49.345211  160181 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 23:30:49.345446  160181 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:30:49.356024  160181 api_server.go:166] Checking apiserver status ...
	I1212 23:30:49.356101  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:30:49.368615  160181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:49.368639  160181 api_server.go:166] Checking apiserver status ...
	I1212 23:30:49.368683  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:30:49.380532  160181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:49.881314  160181 api_server.go:166] Checking apiserver status ...
	I1212 23:30:50.232652  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:30:50.246814  160181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:50.381134  160181 api_server.go:166] Checking apiserver status ...
	I1212 23:30:50.381213  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:30:50.393913  160181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:50.881522  160181 api_server.go:166] Checking apiserver status ...
	I1212 23:30:50.881607  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:30:50.897127  160181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:51.380664  160181 api_server.go:166] Checking apiserver status ...
	I1212 23:30:51.380762  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:30:51.393948  160181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:51.881595  160181 api_server.go:166] Checking apiserver status ...
	I1212 23:30:51.881692  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:30:51.894038  160181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:52.380640  160181 api_server.go:166] Checking apiserver status ...
	I1212 23:30:52.380731  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:30:52.393586  160181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:52.881275  160181 api_server.go:166] Checking apiserver status ...
	I1212 23:30:52.881373  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:30:52.894499  160181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:53.381017  160181 api_server.go:166] Checking apiserver status ...
	I1212 23:30:53.381119  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:30:53.393488  160181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:53.881007  160181 api_server.go:166] Checking apiserver status ...
	I1212 23:30:53.881113  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:30:53.892978  160181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:54.381545  160181 api_server.go:166] Checking apiserver status ...
	I1212 23:30:54.381650  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:30:54.394270  160181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:54.881177  160181 api_server.go:166] Checking apiserver status ...
	I1212 23:30:54.881247  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:30:54.894628  160181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:55.381292  160181 api_server.go:166] Checking apiserver status ...
	I1212 23:30:55.381393  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:30:55.394054  160181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:55.881616  160181 api_server.go:166] Checking apiserver status ...
	I1212 23:30:55.881692  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:30:55.894457  160181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:56.381050  160181 api_server.go:166] Checking apiserver status ...
	I1212 23:30:56.381154  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:30:56.394042  160181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:56.881656  160181 api_server.go:166] Checking apiserver status ...
	I1212 23:30:56.881775  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:30:56.894827  160181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:57.381413  160181 api_server.go:166] Checking apiserver status ...
	I1212 23:30:57.381518  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:30:57.394097  160181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:57.881621  160181 api_server.go:166] Checking apiserver status ...
	I1212 23:30:57.881735  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:30:57.894229  160181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:58.380736  160181 api_server.go:166] Checking apiserver status ...
	I1212 23:30:58.380843  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:30:58.394597  160181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:58.881129  160181 api_server.go:166] Checking apiserver status ...
	I1212 23:30:58.881233  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:30:58.895291  160181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:30:59.356056  160181 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 23:30:59.356099  160181 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:30:59.356113  160181 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:30:59.356184  160181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:30:59.400050  160181 cri.go:89] found id: ""
	I1212 23:30:59.400126  160181 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:30:59.417608  160181 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:30:59.427475  160181 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 23:30:59.427494  160181 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 23:30:59.427501  160181 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 23:30:59.427509  160181 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:30:59.427542  160181 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:30:59.427589  160181 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:30:59.437464  160181 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:30:59.437481  160181 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:30:59.554005  160181 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:30:59.555253  160181 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 23:30:59.556525  160181 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 23:30:59.557903  160181 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 23:30:59.559281  160181 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1212 23:30:59.559863  160181 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1212 23:30:59.560988  160181 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1212 23:30:59.561503  160181 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1212 23:30:59.561955  160181 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1212 23:30:59.562569  160181 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 23:30:59.562980  160181 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 23:30:59.563742  160181 command_runner.go:130] > [certs] Using the existing "sa" key
	I1212 23:30:59.564967  160181 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:30:59.616704  160181 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:30:59.820223  160181 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:30:59.960612  160181 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:31:00.046222  160181 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:31:00.103759  160181 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:31:00.106985  160181 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:31:00.316900  160181 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:31:00.316934  160181 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:31:00.316948  160181 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 23:31:00.316983  160181 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:31:00.380516  160181 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:31:00.380548  160181 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:31:00.383262  160181 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:31:00.384982  160181 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:31:00.388806  160181 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:31:00.480143  160181 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:31:00.491382  160181 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:31:00.491460  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:31:00.508836  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:31:01.023848  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:31:01.523692  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:31:02.023618  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:31:02.524070  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:31:03.023660  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:31:03.047784  160181 command_runner.go:130] > 1066
	I1212 23:31:03.052058  160181 api_server.go:72] duration metric: took 2.560673727s to wait for apiserver process to appear ...
	I1212 23:31:03.052104  160181 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:31:03.052127  160181 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I1212 23:31:06.984327  160181 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:31:06.984365  160181 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:31:06.984379  160181 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I1212 23:31:07.109770  160181 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:31:07.109826  160181 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:31:07.610546  160181 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I1212 23:31:07.617664  160181 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:31:07.617702  160181 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:31:08.110848  160181 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I1212 23:31:08.119656  160181 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:31:08.119696  160181 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:31:08.610237  160181 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I1212 23:31:08.617058  160181 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I1212 23:31:08.617152  160181 round_trippers.go:463] GET https://192.168.39.38:8443/version
	I1212 23:31:08.617164  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:08.617176  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:08.617189  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:08.634878  160181 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1212 23:31:08.634901  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:08.634908  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:08.634914  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:08.634919  160181 round_trippers.go:580]     Content-Length: 264
	I1212 23:31:08.634924  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:08 GMT
	I1212 23:31:08.634929  160181 round_trippers.go:580]     Audit-Id: 552cc578-f6db-4f2e-b659-f500f245b5f1
	I1212 23:31:08.634934  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:08.634939  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:08.634971  160181 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 23:31:08.635050  160181 api_server.go:141] control plane version: v1.28.4
	I1212 23:31:08.635065  160181 api_server.go:131] duration metric: took 5.58295418s to wait for apiserver health ...
	I1212 23:31:08.635076  160181 cni.go:84] Creating CNI manager for ""
	I1212 23:31:08.635081  160181 cni.go:136] 3 nodes found, recommending kindnet
	I1212 23:31:08.637301  160181 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 23:31:08.639086  160181 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 23:31:08.655079  160181 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 23:31:08.655114  160181 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1212 23:31:08.655142  160181 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 23:31:08.655152  160181 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 23:31:08.655158  160181 command_runner.go:130] > Access: 2023-12-12 23:30:35.501189212 +0000
	I1212 23:31:08.655165  160181 command_runner.go:130] > Modify: 2023-12-08 06:25:18.000000000 +0000
	I1212 23:31:08.655178  160181 command_runner.go:130] > Change: 2023-12-12 23:30:33.624189212 +0000
	I1212 23:31:08.655183  160181 command_runner.go:130] >  Birth: -
	I1212 23:31:08.655244  160181 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 23:31:08.655258  160181 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 23:31:08.712057  160181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 23:31:09.860629  160181 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1212 23:31:09.860663  160181 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1212 23:31:09.860672  160181 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1212 23:31:09.860707  160181 command_runner.go:130] > daemonset.apps/kindnet configured
	I1212 23:31:09.860754  160181 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.148646379s)
	I1212 23:31:09.860790  160181 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:31:09.860894  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I1212 23:31:09.860904  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:09.860944  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:09.860958  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:09.866264  160181 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:31:09.866285  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:09.866292  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:09.866297  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:09.866303  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:09.866308  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:09.866313  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:09 GMT
	I1212 23:31:09.866319  160181 round_trippers.go:580]     Audit-Id: 067d573e-6f01-4a0f-a09b-a1f0cc1cf3f2
	I1212 23:31:09.867632  160181 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"855"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"815","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82606 chars]
	I1212 23:31:09.872899  160181 system_pods.go:59] 12 kube-system pods found
	I1212 23:31:09.872939  160181 system_pods.go:61] "coredns-5dd5756b68-zcxks" [503de693-19d6-45c5-97c6-3b8e5657bfee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 23:31:09.872948  160181 system_pods.go:61] "etcd-multinode-510563" [2748a67b-24f2-4b90-bf95-eb56755a397a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 23:31:09.872957  160181 system_pods.go:61] "kindnet-5v7sf" [ed1b67f7-1607-4266-9a99-dd7e084a0abc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 23:31:09.872966  160181 system_pods.go:61] "kindnet-lqdxw" [56d8e0e6-679d-47bd-af1f-c1b8d8018eb5] Running
	I1212 23:31:09.872973  160181 system_pods.go:61] "kindnet-v4js8" [cfe24f85-472c-4ef2-9a48-9e3647cc8feb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 23:31:09.872979  160181 system_pods.go:61] "kube-apiserver-multinode-510563" [e8a8ed00-d13d-44f0-b7d6-b42bf1342d95] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 23:31:09.872985  160181 system_pods.go:61] "kube-controller-manager-multinode-510563" [efdc7f68-25d6-4f6a-ab8f-1dec43407375] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 23:31:09.872991  160181 system_pods.go:61] "kube-proxy-fbk65" [478c2dce-ac51-47ac-9d34-20dc7c331056] Running
	I1212 23:31:09.872998  160181 system_pods.go:61] "kube-proxy-hspw8" [a2255be6-8705-40cd-8f35-a3e82906190c] Running
	I1212 23:31:09.873002  160181 system_pods.go:61] "kube-proxy-msx8s" [f41b9a6d-8132-45a6-9847-5a762664b008] Running
	I1212 23:31:09.873010  160181 system_pods.go:61] "kube-scheduler-multinode-510563" [044da73c-9466-4a43-b283-5f4b9cc04df9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 23:31:09.873014  160181 system_pods.go:61] "storage-provisioner" [cb4f186a-9bb9-488f-8a74-6e01f352fc05] Running
	I1212 23:31:09.873021  160181 system_pods.go:74] duration metric: took 12.219324ms to wait for pod list to return data ...
	I1212 23:31:09.873032  160181 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:31:09.873089  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes
	I1212 23:31:09.873096  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:09.873103  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:09.873109  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:09.876337  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:09.876357  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:09.876365  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:09 GMT
	I1212 23:31:09.876374  160181 round_trippers.go:580]     Audit-Id: 901f716d-b532-417e-bfe5-c78976cc0cea
	I1212 23:31:09.876386  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:09.876394  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:09.876403  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:09.876416  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:09.877093  160181 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"855"},"items":[{"metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"769","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16355 chars]
	I1212 23:31:09.877870  160181 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:31:09.877899  160181 node_conditions.go:123] node cpu capacity is 2
	I1212 23:31:09.877910  160181 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:31:09.877914  160181 node_conditions.go:123] node cpu capacity is 2
	I1212 23:31:09.877917  160181 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:31:09.877921  160181 node_conditions.go:123] node cpu capacity is 2
	I1212 23:31:09.877925  160181 node_conditions.go:105] duration metric: took 4.888499ms to run NodePressure ...
	I1212 23:31:09.877944  160181 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:31:10.050789  160181 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 23:31:10.109059  160181 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 23:31:10.110741  160181 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:31:10.110865  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I1212 23:31:10.110877  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:10.110885  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:10.110892  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:10.116274  160181 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:31:10.116298  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:10.116308  160181 round_trippers.go:580]     Audit-Id: ce5e33e0-8d9a-4bac-97a7-2e1ae24eb1c2
	I1212 23:31:10.116316  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:10.116324  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:10.116332  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:10.116340  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:10.116349  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:10 GMT
	I1212 23:31:10.118275  160181 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"857"},"items":[{"metadata":{"name":"etcd-multinode-510563","namespace":"kube-system","uid":"2748a67b-24f2-4b90-bf95-eb56755a397a","resourceVersion":"809","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.mirror":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.seen":"2023-12-12T23:20:36.354957049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations
":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:ku [truncated 28859 chars]
	I1212 23:31:10.119705  160181 kubeadm.go:787] kubelet initialised
	I1212 23:31:10.119731  160181 kubeadm.go:788] duration metric: took 8.969923ms waiting for restarted kubelet to initialise ...
	I1212 23:31:10.119741  160181 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:31:10.119824  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I1212 23:31:10.119841  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:10.119852  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:10.119862  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:10.131156  160181 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1212 23:31:10.131184  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:10.131201  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:10.131207  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:10.131212  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:10 GMT
	I1212 23:31:10.131217  160181 round_trippers.go:580]     Audit-Id: a5ccfc0a-773f-4cee-b46a-ab742a511aaf
	I1212 23:31:10.131222  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:10.131227  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:10.132461  160181 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"857"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"815","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82606 chars]
	I1212 23:31:10.134885  160181 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zcxks" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:10.134973  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcxks
	I1212 23:31:10.134983  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:10.134993  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:10.135001  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:10.137363  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:10.137383  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:10.137392  160181 round_trippers.go:580]     Audit-Id: b8b98fa4-1eb6-4de8-bb13-fcd280072c9a
	I1212 23:31:10.137400  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:10.137409  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:10.137416  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:10.137424  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:10.137432  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:10 GMT
	I1212 23:31:10.137683  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"815","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1212 23:31:10.138143  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:10.138159  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:10.138169  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:10.138177  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:10.140392  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:10.140445  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:10.140462  160181 round_trippers.go:580]     Audit-Id: 3151e2e7-ccd5-4cec-b3a9-80d0f0c78c7c
	I1212 23:31:10.140475  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:10.140487  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:10.140499  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:10.140511  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:10.140523  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:10 GMT
	I1212 23:31:10.140829  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"769","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1212 23:31:10.141127  160181 pod_ready.go:97] node "multinode-510563" hosting pod "coredns-5dd5756b68-zcxks" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-510563" has status "Ready":"False"
	I1212 23:31:10.141146  160181 pod_ready.go:81] duration metric: took 6.239703ms waiting for pod "coredns-5dd5756b68-zcxks" in "kube-system" namespace to be "Ready" ...
	E1212 23:31:10.141158  160181 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-510563" hosting pod "coredns-5dd5756b68-zcxks" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-510563" has status "Ready":"False"
	I1212 23:31:10.141173  160181 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:10.141245  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-510563
	I1212 23:31:10.141255  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:10.141265  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:10.141275  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:10.144828  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:10.144844  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:10.144850  160181 round_trippers.go:580]     Audit-Id: 6b21f2d4-fe49-4d8e-a90e-ee242c89dda3
	I1212 23:31:10.144858  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:10.144867  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:10.144875  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:10.144887  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:10.144893  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:10 GMT
	I1212 23:31:10.145088  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-510563","namespace":"kube-system","uid":"2748a67b-24f2-4b90-bf95-eb56755a397a","resourceVersion":"809","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.mirror":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.seen":"2023-12-12T23:20:36.354957049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1212 23:31:10.145540  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:10.145556  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:10.145564  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:10.145576  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:10.147405  160181 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:31:10.147426  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:10.147432  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:10.147437  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:10 GMT
	I1212 23:31:10.147442  160181 round_trippers.go:580]     Audit-Id: d56885af-60dd-4d0d-9a0f-bd643cbc4892
	I1212 23:31:10.147447  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:10.147452  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:10.147457  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:10.147637  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"769","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1212 23:31:10.147985  160181 pod_ready.go:97] node "multinode-510563" hosting pod "etcd-multinode-510563" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-510563" has status "Ready":"False"
	I1212 23:31:10.148014  160181 pod_ready.go:81] duration metric: took 6.826075ms waiting for pod "etcd-multinode-510563" in "kube-system" namespace to be "Ready" ...
	E1212 23:31:10.148026  160181 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-510563" hosting pod "etcd-multinode-510563" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-510563" has status "Ready":"False"
	I1212 23:31:10.148043  160181 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:10.148125  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-510563
	I1212 23:31:10.148138  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:10.148147  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:10.148153  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:10.150079  160181 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:31:10.150097  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:10.150112  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:10.150121  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:10.150133  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:10 GMT
	I1212 23:31:10.150141  160181 round_trippers.go:580]     Audit-Id: 36766590-d028-430a-85ea-77b0224354a2
	I1212 23:31:10.150158  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:10.150166  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:10.150517  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-510563","namespace":"kube-system","uid":"e8a8ed00-d13d-44f0-b7d6-b42bf1342d95","resourceVersion":"813","creationTimestamp":"2023-12-12T23:20:34Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.38:8443","kubernetes.io/config.hash":"4b970951c1b4ca2bc525afa7c2eb2fef","kubernetes.io/config.mirror":"4b970951c1b4ca2bc525afa7c2eb2fef","kubernetes.io/config.seen":"2023-12-12T23:20:27.932579600Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1212 23:31:10.150913  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:10.150928  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:10.150938  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:10.150946  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:10.153076  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:10.153096  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:10.153105  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:10.153111  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:10.153118  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:10.153126  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:10.153134  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:10 GMT
	I1212 23:31:10.153150  160181 round_trippers.go:580]     Audit-Id: d143e422-7cfb-4904-8609-ab8b9b3265c0
	I1212 23:31:10.153357  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"769","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1212 23:31:10.153750  160181 pod_ready.go:97] node "multinode-510563" hosting pod "kube-apiserver-multinode-510563" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-510563" has status "Ready":"False"
	I1212 23:31:10.153777  160181 pod_ready.go:81] duration metric: took 5.725855ms waiting for pod "kube-apiserver-multinode-510563" in "kube-system" namespace to be "Ready" ...
	E1212 23:31:10.153789  160181 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-510563" hosting pod "kube-apiserver-multinode-510563" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-510563" has status "Ready":"False"
	I1212 23:31:10.153805  160181 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:10.153864  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-510563
	I1212 23:31:10.153875  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:10.153885  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:10.153898  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:10.156048  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:10.156063  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:10.156069  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:10.156074  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:10.156078  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:10 GMT
	I1212 23:31:10.156083  160181 round_trippers.go:580]     Audit-Id: 004ea726-0d78-44db-8c60-ac3fbee627f7
	I1212 23:31:10.156088  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:10.156098  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:10.156301  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-510563","namespace":"kube-system","uid":"efdc7f68-25d6-4f6a-ab8f-1dec43407375","resourceVersion":"814","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"50f588add554ab298cca0792048dbecc","kubernetes.io/config.mirror":"50f588add554ab298cca0792048dbecc","kubernetes.io/config.seen":"2023-12-12T23:20:36.354954910Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7212 chars]
	I1212 23:31:10.260968  160181 request.go:629] Waited for 104.203746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:10.261042  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:10.261046  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:10.261059  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:10.261070  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:10.263758  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:10.263776  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:10.263783  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:10 GMT
	I1212 23:31:10.263789  160181 round_trippers.go:580]     Audit-Id: fa410e19-56d7-42c0-ad69-4cf1b4ac9e3f
	I1212 23:31:10.263794  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:10.263799  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:10.263804  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:10.263809  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:10.264136  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"769","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1212 23:31:10.264484  160181 pod_ready.go:97] node "multinode-510563" hosting pod "kube-controller-manager-multinode-510563" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-510563" has status "Ready":"False"
	I1212 23:31:10.264515  160181 pod_ready.go:81] duration metric: took 110.699423ms waiting for pod "kube-controller-manager-multinode-510563" in "kube-system" namespace to be "Ready" ...
	E1212 23:31:10.264528  160181 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-510563" hosting pod "kube-controller-manager-multinode-510563" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-510563" has status "Ready":"False"
	I1212 23:31:10.264537  160181 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fbk65" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:10.460946  160181 request.go:629] Waited for 196.321218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fbk65
	I1212 23:31:10.461025  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fbk65
	I1212 23:31:10.461034  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:10.461052  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:10.461060  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:10.464112  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:10.464131  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:10.464147  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:10.464156  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:10 GMT
	I1212 23:31:10.464164  160181 round_trippers.go:580]     Audit-Id: f65a8eba-ce14-4ae7-a931-9bd8982880cc
	I1212 23:31:10.464172  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:10.464180  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:10.464188  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:10.464399  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fbk65","generateName":"kube-proxy-","namespace":"kube-system","uid":"478c2dce-ac51-47ac-9d34-20dc7c331056","resourceVersion":"742","creationTimestamp":"2023-12-12T23:22:27Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d1b4c800-a24b-499d-bbe3-4a554353bc2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:22:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1b4c800-a24b-499d-bbe3-4a554353bc2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1212 23:31:10.661068  160181 request.go:629] Waited for 196.239918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m03
	I1212 23:31:10.661151  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m03
	I1212 23:31:10.661156  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:10.661164  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:10.661170  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:10.663829  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:10.663852  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:10.663861  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:10.663867  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:10.663872  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:10 GMT
	I1212 23:31:10.663877  160181 round_trippers.go:580]     Audit-Id: b6a3aa4c-cde2-45fd-9b6e-0dd3bc569ff0
	I1212 23:31:10.663882  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:10.663887  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:10.664143  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m03","uid":"6de0e5a4-53e7-4397-9be8-0053fa116498","resourceVersion":"776","creationTimestamp":"2023-12-12T23:23:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_23_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:23:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I1212 23:31:10.664414  160181 pod_ready.go:92] pod "kube-proxy-fbk65" in "kube-system" namespace has status "Ready":"True"
	I1212 23:31:10.664444  160181 pod_ready.go:81] duration metric: took 399.880808ms waiting for pod "kube-proxy-fbk65" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:10.664454  160181 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hspw8" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:10.861947  160181 request.go:629] Waited for 197.415992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hspw8
	I1212 23:31:10.862015  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hspw8
	I1212 23:31:10.862027  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:10.862035  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:10.862042  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:10.864839  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:10.864860  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:10.864866  160181 round_trippers.go:580]     Audit-Id: 90a5e04f-ccf1-423c-89d6-9fe9255c1573
	I1212 23:31:10.864874  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:10.864882  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:10.864890  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:10.864899  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:10.864907  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:10 GMT
	I1212 23:31:10.865052  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hspw8","generateName":"kube-proxy-","namespace":"kube-system","uid":"a2255be6-8705-40cd-8f35-a3e82906190c","resourceVersion":"855","creationTimestamp":"2023-12-12T23:20:49Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d1b4c800-a24b-499d-bbe3-4a554353bc2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1b4c800-a24b-499d-bbe3-4a554353bc2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1212 23:31:11.061870  160181 request.go:629] Waited for 196.355078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:11.061937  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:11.061942  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:11.061950  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:11.061956  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:11.066679  160181 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:31:11.066695  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:11.066702  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:11.066707  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:11.066712  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:11.066719  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:11.066727  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:11 GMT
	I1212 23:31:11.066735  160181 round_trippers.go:580]     Audit-Id: 05271846-7f08-47a4-a10c-71de790c567a
	I1212 23:31:11.066917  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"769","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1212 23:31:11.067219  160181 pod_ready.go:97] node "multinode-510563" hosting pod "kube-proxy-hspw8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-510563" has status "Ready":"False"
	I1212 23:31:11.067236  160181 pod_ready.go:81] duration metric: took 402.77667ms waiting for pod "kube-proxy-hspw8" in "kube-system" namespace to be "Ready" ...
	E1212 23:31:11.067245  160181 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-510563" hosting pod "kube-proxy-hspw8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-510563" has status "Ready":"False"
	I1212 23:31:11.067254  160181 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-msx8s" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:11.261646  160181 request.go:629] Waited for 194.312845ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msx8s
	I1212 23:31:11.261725  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msx8s
	I1212 23:31:11.261732  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:11.261740  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:11.261749  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:11.264003  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:11.264026  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:11.264036  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:11.264045  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:11 GMT
	I1212 23:31:11.264059  160181 round_trippers.go:580]     Audit-Id: c51cf8e2-da3d-4629-b02a-a23b73e6b174
	I1212 23:31:11.264066  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:11.264079  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:11.264090  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:11.264226  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-msx8s","generateName":"kube-proxy-","namespace":"kube-system","uid":"f41b9a6d-8132-45a6-9847-5a762664b008","resourceVersion":"525","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d1b4c800-a24b-499d-bbe3-4a554353bc2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1b4c800-a24b-499d-bbe3-4a554353bc2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1212 23:31:11.461083  160181 request.go:629] Waited for 196.293253ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:31:11.461166  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:31:11.461188  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:11.461209  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:11.461223  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:11.463774  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:11.463797  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:11.463806  160181 round_trippers.go:580]     Audit-Id: 9c5a2310-545f-45cb-8196-7e3c43af6783
	I1212 23:31:11.463813  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:11.463820  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:11.463827  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:11.463841  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:11.463853  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:11 GMT
	I1212 23:31:11.463995  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"772","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_23_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 4236 chars]
	I1212 23:31:11.464371  160181 pod_ready.go:92] pod "kube-proxy-msx8s" in "kube-system" namespace has status "Ready":"True"
	I1212 23:31:11.464398  160181 pod_ready.go:81] duration metric: took 397.134442ms waiting for pod "kube-proxy-msx8s" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:11.464418  160181 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:11.661874  160181 request.go:629] Waited for 197.356339ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-510563
	I1212 23:31:11.661970  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-510563
	I1212 23:31:11.661978  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:11.661991  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:11.662023  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:11.664878  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:11.664903  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:11.664912  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:11 GMT
	I1212 23:31:11.664920  160181 round_trippers.go:580]     Audit-Id: e1c18314-6f14-4f1a-b6ff-8b36adce1ddb
	I1212 23:31:11.664927  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:11.664934  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:11.664940  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:11.664947  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:11.665300  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-510563","namespace":"kube-system","uid":"044da73c-9466-4a43-b283-5f4b9cc04df9","resourceVersion":"812","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fdb335c77d5fb1581ea23fa0adf419e9","kubernetes.io/config.mirror":"fdb335c77d5fb1581ea23fa0adf419e9","kubernetes.io/config.seen":"2023-12-12T23:20:36.354955844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4924 chars]
	I1212 23:31:11.861894  160181 request.go:629] Waited for 196.217045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:11.861976  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:11.861985  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:11.861999  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:11.862023  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:11.866674  160181 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:31:11.866691  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:11.866698  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:11.866706  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:11.866711  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:11 GMT
	I1212 23:31:11.866717  160181 round_trippers.go:580]     Audit-Id: ca7851af-bf21-4502-ac25-e6daab784638
	I1212 23:31:11.866722  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:11.866727  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:11.866913  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"769","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1212 23:31:11.867228  160181 pod_ready.go:97] node "multinode-510563" hosting pod "kube-scheduler-multinode-510563" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-510563" has status "Ready":"False"
	I1212 23:31:11.867248  160181 pod_ready.go:81] duration metric: took 402.821978ms waiting for pod "kube-scheduler-multinode-510563" in "kube-system" namespace to be "Ready" ...
	E1212 23:31:11.867257  160181 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-510563" hosting pod "kube-scheduler-multinode-510563" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-510563" has status "Ready":"False"
	I1212 23:31:11.867265  160181 pod_ready.go:38] duration metric: took 1.747516028s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:31:11.867281  160181 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:31:11.883253  160181 command_runner.go:130] > -16
	I1212 23:31:11.883419  160181 ops.go:34] apiserver oom_adj: -16
	I1212 23:31:11.883435  160181 kubeadm.go:640] restartCluster took 22.550178106s
	I1212 23:31:11.883442  160181 kubeadm.go:406] StartCluster complete in 22.60431442s
	I1212 23:31:11.883459  160181 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:31:11.883540  160181 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:31:11.884228  160181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:31:11.884487  160181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:31:11.884687  160181 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:31:11.887654  160181 out.go:177] * Enabled addons: 
	I1212 23:31:11.884857  160181 config.go:182] Loaded profile config "multinode-510563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:31:11.885006  160181 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:31:11.889253  160181 addons.go:502] enable addons completed in 4.56758ms: enabled=[]
	I1212 23:31:11.889505  160181 kapi.go:59] client config for multinode-510563: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.crt", KeyFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.key", CAFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:31:11.889836  160181 round_trippers.go:463] GET https://192.168.39.38:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:31:11.889850  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:11.889860  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:11.889868  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:11.892554  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:11.892568  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:11.892580  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:11.892588  160181 round_trippers.go:580]     Content-Length: 291
	I1212 23:31:11.892596  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:11 GMT
	I1212 23:31:11.892603  160181 round_trippers.go:580]     Audit-Id: 9ede695a-eb3e-40ff-bc1b-b499808940aa
	I1212 23:31:11.892612  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:11.892620  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:11.892632  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:11.892683  160181 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b98a4ef4-74a2-4d35-a29b-c065c5f3121c","resourceVersion":"856","creationTimestamp":"2023-12-12T23:20:36Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1212 23:31:11.892853  160181 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-510563" context rescaled to 1 replicas
	I1212 23:31:11.892881  160181 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:31:11.894416  160181 out.go:177] * Verifying Kubernetes components...
	I1212 23:31:11.895817  160181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:31:11.984697  160181 command_runner.go:130] > apiVersion: v1
	I1212 23:31:11.984719  160181 command_runner.go:130] > data:
	I1212 23:31:11.984727  160181 command_runner.go:130] >   Corefile: |
	I1212 23:31:11.984737  160181 command_runner.go:130] >     .:53 {
	I1212 23:31:11.984742  160181 command_runner.go:130] >         log
	I1212 23:31:11.984748  160181 command_runner.go:130] >         errors
	I1212 23:31:11.984754  160181 command_runner.go:130] >         health {
	I1212 23:31:11.984761  160181 command_runner.go:130] >            lameduck 5s
	I1212 23:31:11.984767  160181 command_runner.go:130] >         }
	I1212 23:31:11.984774  160181 command_runner.go:130] >         ready
	I1212 23:31:11.984787  160181 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 23:31:11.984795  160181 command_runner.go:130] >            pods insecure
	I1212 23:31:11.984805  160181 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 23:31:11.984816  160181 command_runner.go:130] >            ttl 30
	I1212 23:31:11.984823  160181 command_runner.go:130] >         }
	I1212 23:31:11.984832  160181 command_runner.go:130] >         prometheus :9153
	I1212 23:31:11.984839  160181 command_runner.go:130] >         hosts {
	I1212 23:31:11.984848  160181 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1212 23:31:11.984856  160181 command_runner.go:130] >            fallthrough
	I1212 23:31:11.984863  160181 command_runner.go:130] >         }
	I1212 23:31:11.984872  160181 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 23:31:11.984887  160181 command_runner.go:130] >            max_concurrent 1000
	I1212 23:31:11.984897  160181 command_runner.go:130] >         }
	I1212 23:31:11.984904  160181 command_runner.go:130] >         cache 30
	I1212 23:31:11.984915  160181 command_runner.go:130] >         loop
	I1212 23:31:11.984922  160181 command_runner.go:130] >         reload
	I1212 23:31:11.984930  160181 command_runner.go:130] >         loadbalance
	I1212 23:31:11.984937  160181 command_runner.go:130] >     }
	I1212 23:31:11.984944  160181 command_runner.go:130] > kind: ConfigMap
	I1212 23:31:11.984951  160181 command_runner.go:130] > metadata:
	I1212 23:31:11.984960  160181 command_runner.go:130] >   creationTimestamp: "2023-12-12T23:20:36Z"
	I1212 23:31:11.984967  160181 command_runner.go:130] >   name: coredns
	I1212 23:31:11.984975  160181 command_runner.go:130] >   namespace: kube-system
	I1212 23:31:11.984982  160181 command_runner.go:130] >   resourceVersion: "400"
	I1212 23:31:11.985003  160181 command_runner.go:130] >   uid: 15ab4162-ef32-4564-b7ab-f6d6948ed723
	I1212 23:31:11.990551  160181 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 23:31:11.990553  160181 node_ready.go:35] waiting up to 6m0s for node "multinode-510563" to be "Ready" ...
	I1212 23:31:12.061930  160181 request.go:629] Waited for 71.240094ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:12.061998  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:12.062006  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:12.062022  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:12.062028  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:12.065391  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:12.065416  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:12.065422  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:12.065428  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:12.065433  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:12 GMT
	I1212 23:31:12.065438  160181 round_trippers.go:580]     Audit-Id: 37f3d450-1d6d-4101-bf71-95dc3d90dded
	I1212 23:31:12.065446  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:12.065451  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:12.065614  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"769","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1212 23:31:12.261318  160181 request.go:629] Waited for 195.372984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:12.261407  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:12.261413  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:12.261421  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:12.261428  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:12.264538  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:12.264557  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:12.264564  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:12.264569  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:12 GMT
	I1212 23:31:12.264574  160181 round_trippers.go:580]     Audit-Id: b06b76e8-b24d-4965-908d-80b2385a042d
	I1212 23:31:12.264587  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:12.264596  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:12.264600  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:12.265001  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"769","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1212 23:31:12.766194  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:12.766220  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:12.766231  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:12.766240  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:12.769010  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:12.769026  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:12.769036  160181 round_trippers.go:580]     Audit-Id: f9515aad-f9e2-4d21-b0c4-c457804a92bb
	I1212 23:31:12.769045  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:12.769052  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:12.769062  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:12.769070  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:12.769081  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:12 GMT
	I1212 23:31:12.769375  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"769","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1212 23:31:13.266466  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:13.266488  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:13.266496  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:13.266507  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:13.269168  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:13.269188  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:13.269195  160181 round_trippers.go:580]     Audit-Id: e3f7898c-4b92-4922-bdba-23641a20bcef
	I1212 23:31:13.269200  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:13.269205  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:13.269210  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:13.269215  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:13.269220  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:13 GMT
	I1212 23:31:13.269486  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:13.269849  160181 node_ready.go:49] node "multinode-510563" has status "Ready":"True"
	I1212 23:31:13.269868  160181 node_ready.go:38] duration metric: took 1.279287339s waiting for node "multinode-510563" to be "Ready" ...
	I1212 23:31:13.269877  160181 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:31:13.269948  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I1212 23:31:13.269959  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:13.269967  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:13.269972  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:13.278242  160181 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 23:31:13.278268  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:13.278275  160181 round_trippers.go:580]     Audit-Id: d3eb3614-6b79-48c7-b67a-58e7490068a9
	I1212 23:31:13.278284  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:13.278293  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:13.278301  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:13.278309  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:13.278320  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:13 GMT
	I1212 23:31:13.281371  160181 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"888"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"815","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82663 chars]
	I1212 23:31:13.283853  160181 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zcxks" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:13.283914  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcxks
	I1212 23:31:13.283921  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:13.283929  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:13.283935  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:13.287251  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:13.287271  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:13.287282  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:13.287291  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:13 GMT
	I1212 23:31:13.287299  160181 round_trippers.go:580]     Audit-Id: 675be55c-0dbf-4a92-b2bb-3c982bb7d28b
	I1212 23:31:13.287314  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:13.287326  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:13.287333  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:13.287918  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"815","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1212 23:31:13.288337  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:13.288352  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:13.288359  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:13.288365  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:13.291445  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:13.291464  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:13.291472  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:13 GMT
	I1212 23:31:13.291480  160181 round_trippers.go:580]     Audit-Id: 09ee0114-e50f-4b62-9454-67c3ad7f983e
	I1212 23:31:13.291489  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:13.291497  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:13.291505  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:13.291512  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:13.292084  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:13.461845  160181 request.go:629] Waited for 169.39039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcxks
	I1212 23:31:13.461946  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcxks
	I1212 23:31:13.461952  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:13.461959  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:13.461966  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:13.464875  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:13.464894  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:13.464901  160181 round_trippers.go:580]     Audit-Id: eb322132-910f-4954-9168-e479e6d16703
	I1212 23:31:13.464906  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:13.464911  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:13.464916  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:13.464921  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:13.464926  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:13 GMT
	I1212 23:31:13.465078  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"815","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1212 23:31:13.660959  160181 request.go:629] Waited for 195.318424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:13.661039  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:13.661044  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:13.661055  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:13.661064  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:13.663860  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:13.663884  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:13.663894  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:13 GMT
	I1212 23:31:13.663902  160181 round_trippers.go:580]     Audit-Id: 4a6f8fc4-1659-4b9b-b86a-3f9bc4322ed4
	I1212 23:31:13.663909  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:13.663916  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:13.663924  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:13.663932  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:13.664221  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:14.165376  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcxks
	I1212 23:31:14.165402  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:14.165411  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:14.165417  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:14.168546  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:14.168572  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:14.168582  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:14.168591  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:14 GMT
	I1212 23:31:14.168600  160181 round_trippers.go:580]     Audit-Id: efc1c50f-86e7-4a5e-8db8-d479d6a7589a
	I1212 23:31:14.168608  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:14.168631  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:14.168642  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:14.169064  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"815","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1212 23:31:14.169497  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:14.169511  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:14.169519  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:14.169525  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:14.172292  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:14.172310  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:14.172319  160181 round_trippers.go:580]     Audit-Id: aaa20b3b-4b1c-4d02-a1b8-263c792f05d1
	I1212 23:31:14.172326  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:14.172333  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:14.172344  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:14.172356  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:14.172371  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:14 GMT
	I1212 23:31:14.172506  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:14.665242  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcxks
	I1212 23:31:14.665308  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:14.665328  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:14.665338  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:14.669905  160181 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:31:14.669924  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:14.669931  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:14.669936  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:14.669941  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:14.669946  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:14 GMT
	I1212 23:31:14.669951  160181 round_trippers.go:580]     Audit-Id: c51e4cbc-85a2-48fd-a1b0-063b63b7876d
	I1212 23:31:14.669956  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:14.671294  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"815","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1212 23:31:14.671725  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:14.671739  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:14.671746  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:14.671753  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:14.677571  160181 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 23:31:14.677587  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:14.677593  160181 round_trippers.go:580]     Audit-Id: 0695f5b8-a84b-4335-92d2-e455c3e60d32
	I1212 23:31:14.677601  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:14.677609  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:14.677618  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:14.677630  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:14.677641  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:14 GMT
	I1212 23:31:14.678232  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:15.165300  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcxks
	I1212 23:31:15.165324  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:15.165332  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:15.165338  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:15.168587  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:15.168607  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:15.168614  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:15.168620  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:15.168625  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:15 GMT
	I1212 23:31:15.168632  160181 round_trippers.go:580]     Audit-Id: 6b15d28d-7314-4e24-a46c-68db15ea816b
	I1212 23:31:15.168640  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:15.168648  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:15.169243  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"815","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1212 23:31:15.169682  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:15.169696  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:15.169703  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:15.169709  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:15.171930  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:15.171946  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:15.171955  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:15.171963  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:15.171970  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:15 GMT
	I1212 23:31:15.171978  160181 round_trippers.go:580]     Audit-Id: e974f827-9d8b-4254-9eb1-c2dd81c6baa0
	I1212 23:31:15.171987  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:15.171996  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:15.172214  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:15.664906  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcxks
	I1212 23:31:15.664954  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:15.664968  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:15.664978  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:15.667997  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:15.668017  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:15.668025  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:15.668031  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:15.668039  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:15.668047  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:15 GMT
	I1212 23:31:15.668056  160181 round_trippers.go:580]     Audit-Id: 8e687914-ce16-42ac-89c1-08d05bfc2ef8
	I1212 23:31:15.668066  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:15.668455  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"815","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1212 23:31:15.668898  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:15.668909  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:15.668918  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:15.668928  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:15.671619  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:15.671652  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:15.671661  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:15.671672  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:15.671681  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:15.671691  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:15 GMT
	I1212 23:31:15.671700  160181 round_trippers.go:580]     Audit-Id: 02094f43-fef0-4d4d-97fd-aaee0a01d6aa
	I1212 23:31:15.671710  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:15.671879  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:15.672152  160181 pod_ready.go:102] pod "coredns-5dd5756b68-zcxks" in "kube-system" namespace has status "Ready":"False"
	I1212 23:31:16.165611  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcxks
	I1212 23:31:16.165631  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:16.165639  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:16.165646  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:16.169275  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:16.169298  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:16.169308  160181 round_trippers.go:580]     Audit-Id: f0262352-0a47-43e6-9a2e-6a9821ade8ba
	I1212 23:31:16.169317  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:16.169324  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:16.169336  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:16.169344  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:16.169354  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:16 GMT
	I1212 23:31:16.169523  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"815","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1212 23:31:16.170109  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:16.170129  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:16.170139  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:16.170148  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:16.172292  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:16.172307  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:16.172314  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:16.172319  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:16.172324  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:16 GMT
	I1212 23:31:16.172329  160181 round_trippers.go:580]     Audit-Id: 1a942081-b054-4545-b27b-a62ba01c18df
	I1212 23:31:16.172334  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:16.172339  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:16.172533  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:16.665185  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcxks
	I1212 23:31:16.665212  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:16.665221  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:16.665227  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:16.668067  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:16.668089  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:16.668100  160181 round_trippers.go:580]     Audit-Id: ba61a654-96cd-469f-990e-dc13d1918571
	I1212 23:31:16.668107  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:16.668114  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:16.668121  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:16.668128  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:16.668137  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:16 GMT
	I1212 23:31:16.668327  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"815","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1212 23:31:16.668883  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:16.668903  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:16.668912  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:16.668927  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:16.671045  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:16.671063  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:16.671072  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:16.671080  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:16 GMT
	I1212 23:31:16.671087  160181 round_trippers.go:580]     Audit-Id: c94bf920-2ae7-4147-a6a9-a9e7ac7bfc16
	I1212 23:31:16.671094  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:16.671102  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:16.671110  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:16.671385  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:17.164857  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcxks
	I1212 23:31:17.164891  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:17.164905  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:17.164974  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:17.167824  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:17.167842  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:17.167849  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:17.167855  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:17.167884  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:17 GMT
	I1212 23:31:17.167895  160181 round_trippers.go:580]     Audit-Id: f3a0cd51-84a9-4e1a-84dd-d24455b75f2a
	I1212 23:31:17.167903  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:17.167911  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:17.168355  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"894","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1212 23:31:17.168955  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:17.168976  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:17.168987  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:17.169006  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:17.171298  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:17.171316  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:17.171325  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:17.171333  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:17 GMT
	I1212 23:31:17.171341  160181 round_trippers.go:580]     Audit-Id: f3589cfd-585f-426b-bfc6-4105ad61b89e
	I1212 23:31:17.171348  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:17.171356  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:17.171369  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:17.171656  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:17.171965  160181 pod_ready.go:92] pod "coredns-5dd5756b68-zcxks" in "kube-system" namespace has status "Ready":"True"
	I1212 23:31:17.171982  160181 pod_ready.go:81] duration metric: took 3.888109784s waiting for pod "coredns-5dd5756b68-zcxks" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:17.171995  160181 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:17.172043  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-510563
	I1212 23:31:17.172052  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:17.172062  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:17.172073  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:17.174739  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:17.174754  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:17.174764  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:17.174772  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:17.174781  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:17 GMT
	I1212 23:31:17.174790  160181 round_trippers.go:580]     Audit-Id: 6c18fa97-0138-451e-82f6-d646891bea8f
	I1212 23:31:17.174803  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:17.174808  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:17.174933  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-510563","namespace":"kube-system","uid":"2748a67b-24f2-4b90-bf95-eb56755a397a","resourceVersion":"809","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.mirror":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.seen":"2023-12-12T23:20:36.354957049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1212 23:31:17.175375  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:17.175393  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:17.175404  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:17.175413  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:17.177466  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:17.177482  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:17.177491  160181 round_trippers.go:580]     Audit-Id: c9865f04-7505-4e82-a938-d7a9a72c99aa
	I1212 23:31:17.177499  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:17.177507  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:17.177516  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:17.177525  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:17.177540  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:17 GMT
	I1212 23:31:17.177640  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:17.177944  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-510563
	I1212 23:31:17.177958  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:17.177968  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:17.177977  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:17.179780  160181 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:31:17.179797  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:17.179803  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:17.179808  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:17.179813  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:17.179819  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:17 GMT
	I1212 23:31:17.179831  160181 round_trippers.go:580]     Audit-Id: 0d8039f8-9829-42a9-9c6d-74207db4511a
	I1212 23:31:17.179839  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:17.180033  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-510563","namespace":"kube-system","uid":"2748a67b-24f2-4b90-bf95-eb56755a397a","resourceVersion":"809","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.mirror":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.seen":"2023-12-12T23:20:36.354957049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1212 23:31:17.261606  160181 request.go:629] Waited for 81.257921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:17.261702  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:17.261716  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:17.261730  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:17.261744  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:17.264395  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:17.264420  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:17.264444  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:17.264453  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:17.264462  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:17 GMT
	I1212 23:31:17.264470  160181 round_trippers.go:580]     Audit-Id: f27d9109-b872-403d-a476-853f564af2ef
	I1212 23:31:17.264487  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:17.264503  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:17.264808  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:17.765684  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-510563
	I1212 23:31:17.765709  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:17.765718  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:17.765723  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:17.769056  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:17.769079  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:17.769089  160181 round_trippers.go:580]     Audit-Id: 002a7265-8d25-4e4a-b0b6-84ec4eeb29b2
	I1212 23:31:17.769097  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:17.769104  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:17.769119  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:17.769137  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:17.769152  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:17 GMT
	I1212 23:31:17.769555  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-510563","namespace":"kube-system","uid":"2748a67b-24f2-4b90-bf95-eb56755a397a","resourceVersion":"809","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.mirror":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.seen":"2023-12-12T23:20:36.354957049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1212 23:31:17.769931  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:17.769943  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:17.769950  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:17.769956  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:17.772835  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:17.772853  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:17.772859  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:17.772864  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:17.772869  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:17.772874  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:17 GMT
	I1212 23:31:17.772882  160181 round_trippers.go:580]     Audit-Id: 8e09f852-1c55-4f16-b017-bf977ac0213c
	I1212 23:31:17.772890  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:17.773203  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:18.265349  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-510563
	I1212 23:31:18.265373  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:18.265381  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:18.265387  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:18.268791  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:18.268812  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:18.268819  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:18.268825  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:18.268830  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:18 GMT
	I1212 23:31:18.268835  160181 round_trippers.go:580]     Audit-Id: 69806031-3258-481b-baee-9f9f02e9c8e3
	I1212 23:31:18.268840  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:18.268845  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:18.269244  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-510563","namespace":"kube-system","uid":"2748a67b-24f2-4b90-bf95-eb56755a397a","resourceVersion":"809","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.mirror":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.seen":"2023-12-12T23:20:36.354957049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1212 23:31:18.269642  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:18.269655  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:18.269662  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:18.269667  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:18.271955  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:18.271974  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:18.271983  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:18.271992  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:18.271998  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:18.272012  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:18.272025  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:18 GMT
	I1212 23:31:18.272035  160181 round_trippers.go:580]     Audit-Id: 53110359-3d07-4fd1-993e-585d2ac125c1
	I1212 23:31:18.272297  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:18.765974  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-510563
	I1212 23:31:18.766006  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:18.766014  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:18.766021  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:18.768985  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:18.769010  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:18.769020  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:18.769030  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:18.769035  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:18 GMT
	I1212 23:31:18.769040  160181 round_trippers.go:580]     Audit-Id: a2421382-0bf0-413f-b321-e2f556ba99f4
	I1212 23:31:18.769045  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:18.769050  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:18.769313  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-510563","namespace":"kube-system","uid":"2748a67b-24f2-4b90-bf95-eb56755a397a","resourceVersion":"809","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.mirror":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.seen":"2023-12-12T23:20:36.354957049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1212 23:31:18.769696  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:18.769714  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:18.769725  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:18.769739  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:18.771885  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:18.771899  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:18.771905  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:18.771910  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:18 GMT
	I1212 23:31:18.771917  160181 round_trippers.go:580]     Audit-Id: 689a09ad-c3e5-4272-abfd-8c278bd89e19
	I1212 23:31:18.771925  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:18.771936  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:18.771945  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:18.772173  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:19.265780  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-510563
	I1212 23:31:19.265811  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:19.265827  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:19.265834  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:19.270672  160181 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:31:19.270695  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:19.270724  160181 round_trippers.go:580]     Audit-Id: 2f1a2892-b614-432a-83b9-0e333fd4e737
	I1212 23:31:19.270733  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:19.270743  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:19.270759  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:19.270768  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:19.270781  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:19 GMT
	I1212 23:31:19.271259  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-510563","namespace":"kube-system","uid":"2748a67b-24f2-4b90-bf95-eb56755a397a","resourceVersion":"809","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.mirror":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.seen":"2023-12-12T23:20:36.354957049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1212 23:31:19.271690  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:19.271707  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:19.271715  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:19.271721  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:19.273600  160181 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:31:19.273612  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:19.273618  160181 round_trippers.go:580]     Audit-Id: 2eb61946-e0ae-47d0-a2ac-e33c599da395
	I1212 23:31:19.273623  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:19.273628  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:19.273633  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:19.273638  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:19.273651  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:19 GMT
	I1212 23:31:19.274091  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:19.274457  160181 pod_ready.go:102] pod "etcd-multinode-510563" in "kube-system" namespace has status "Ready":"False"
	I1212 23:31:19.765741  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-510563
	I1212 23:31:19.765769  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:19.765781  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:19.765794  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:19.769074  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:19.769095  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:19.769103  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:19.769108  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:19.769113  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:19.769119  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:19.769124  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:19 GMT
	I1212 23:31:19.769128  160181 round_trippers.go:580]     Audit-Id: bf81f7e4-97c4-4424-b65c-fdb592884a3c
	I1212 23:31:19.769381  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-510563","namespace":"kube-system","uid":"2748a67b-24f2-4b90-bf95-eb56755a397a","resourceVersion":"809","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.mirror":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.seen":"2023-12-12T23:20:36.354957049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1212 23:31:19.769940  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:19.769963  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:19.769980  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:19.769989  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:19.772402  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:19.772415  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:19.772422  160181 round_trippers.go:580]     Audit-Id: 633c3892-0747-4e95-a359-79d7cb427e04
	I1212 23:31:19.772427  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:19.772445  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:19.772453  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:19.772461  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:19.772471  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:19 GMT
	I1212 23:31:19.772629  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:20.265293  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-510563
	I1212 23:31:20.265317  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:20.265328  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:20.265337  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:20.268530  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:20.268551  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:20.268558  160181 round_trippers.go:580]     Audit-Id: 2ecd1b1a-52f3-43c0-a698-f6cfc03c0467
	I1212 23:31:20.268564  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:20.268569  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:20.268574  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:20.268579  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:20.268584  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:20 GMT
	I1212 23:31:20.269120  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-510563","namespace":"kube-system","uid":"2748a67b-24f2-4b90-bf95-eb56755a397a","resourceVersion":"809","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.mirror":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.seen":"2023-12-12T23:20:36.354957049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1212 23:31:20.269504  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:20.269521  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:20.269532  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:20.269541  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:20.272013  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:20.272033  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:20.272042  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:20.272049  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:20.272057  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:20.272064  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:20 GMT
	I1212 23:31:20.272069  160181 round_trippers.go:580]     Audit-Id: cfc63323-205a-42b9-878e-d3e688bfa97b
	I1212 23:31:20.272074  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:20.272233  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:20.765748  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-510563
	I1212 23:31:20.765777  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:20.765787  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:20.765795  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:20.768960  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:20.768984  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:20.769012  160181 round_trippers.go:580]     Audit-Id: 4485f875-868f-4e26-979b-2420ecb2c78d
	I1212 23:31:20.769025  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:20.769036  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:20.769048  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:20.769054  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:20.769060  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:20 GMT
	I1212 23:31:20.769217  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-510563","namespace":"kube-system","uid":"2748a67b-24f2-4b90-bf95-eb56755a397a","resourceVersion":"809","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.mirror":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.seen":"2023-12-12T23:20:36.354957049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1212 23:31:20.769592  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:20.769605  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:20.769612  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:20.769617  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:20.772004  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:20.772021  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:20.772029  160181 round_trippers.go:580]     Audit-Id: 8908cb2c-0f80-4f58-abd5-9ea48ad8bd11
	I1212 23:31:20.772034  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:20.772039  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:20.772044  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:20.772048  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:20.772053  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:20 GMT
	I1212 23:31:20.772323  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:21.265669  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-510563
	I1212 23:31:21.265695  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:21.265703  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:21.265709  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:21.269075  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:21.269094  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:21.269101  160181 round_trippers.go:580]     Audit-Id: 889e617f-87e9-4374-8198-2c17f9c9c0d8
	I1212 23:31:21.269107  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:21.269120  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:21.269127  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:21.269134  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:21.269145  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:21 GMT
	I1212 23:31:21.269333  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-510563","namespace":"kube-system","uid":"2748a67b-24f2-4b90-bf95-eb56755a397a","resourceVersion":"809","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.mirror":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.seen":"2023-12-12T23:20:36.354957049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1212 23:31:21.269742  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:21.269755  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:21.269762  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:21.269768  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:21.272270  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:21.272288  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:21.272295  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:21 GMT
	I1212 23:31:21.272301  160181 round_trippers.go:580]     Audit-Id: eb272207-4304-41cb-9c99-356732719450
	I1212 23:31:21.272314  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:21.272325  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:21.272338  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:21.272347  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:21.272519  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:21.766319  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-510563
	I1212 23:31:21.766349  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:21.766362  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:21.766384  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:21.769393  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:21.769417  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:21.769424  160181 round_trippers.go:580]     Audit-Id: f15842ae-2f9c-419b-ba91-29a05422df03
	I1212 23:31:21.769430  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:21.769438  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:21.769443  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:21.769462  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:21.769472  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:21 GMT
	I1212 23:31:21.770035  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-510563","namespace":"kube-system","uid":"2748a67b-24f2-4b90-bf95-eb56755a397a","resourceVersion":"809","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.mirror":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.seen":"2023-12-12T23:20:36.354957049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1212 23:31:21.770413  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:21.770425  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:21.770476  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:21.770487  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:21.772481  160181 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:31:21.772500  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:21.772507  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:21.772512  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:21 GMT
	I1212 23:31:21.772519  160181 round_trippers.go:580]     Audit-Id: 7634dc0d-060c-4049-af6b-98366f1b82f3
	I1212 23:31:21.772528  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:21.772535  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:21.772547  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:21.772812  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:21.773092  160181 pod_ready.go:102] pod "etcd-multinode-510563" in "kube-system" namespace has status "Ready":"False"
	I1212 23:31:22.265510  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-510563
	I1212 23:31:22.265539  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:22.265548  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:22.265555  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:22.268749  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:22.268774  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:22.268785  160181 round_trippers.go:580]     Audit-Id: e36c5405-3aab-4869-8e2e-cb4c6a71cbd5
	I1212 23:31:22.268794  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:22.268804  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:22.268813  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:22.268819  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:22.268825  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:22 GMT
	I1212 23:31:22.268979  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-510563","namespace":"kube-system","uid":"2748a67b-24f2-4b90-bf95-eb56755a397a","resourceVersion":"917","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.mirror":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.seen":"2023-12-12T23:20:36.354957049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1212 23:31:22.269401  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:22.269417  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:22.269424  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:22.269430  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:22.273025  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:22.273047  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:22.273067  160181 round_trippers.go:580]     Audit-Id: 96bcd857-526c-4ee3-8578-bfb24cc44ef6
	I1212 23:31:22.273074  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:22.273079  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:22.273087  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:22.273092  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:22.273097  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:22 GMT
	I1212 23:31:22.273447  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:22.273727  160181 pod_ready.go:92] pod "etcd-multinode-510563" in "kube-system" namespace has status "Ready":"True"
	I1212 23:31:22.273742  160181 pod_ready.go:81] duration metric: took 5.101740579s waiting for pod "etcd-multinode-510563" in "kube-system" namespace to be "Ready" ...
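For illustration, the roughly 500ms spacing of the GET requests above is consistent with a simple poll-until-Ready loop. A minimal client-go sketch of that pattern, assuming a hypothetical clientset and namespace (this is not minikube's actual pod_ready.go code), could look like this:

    // Illustrative sketch only; not taken from minikube's pod_ready.go.
    // Polls a Pod's Ready condition every 500ms, the cadence visible in the log above.
    package readiness

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // WaitForPodReady blocks until the named Pod reports Ready or the timeout expires.
    func WaitForPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat transient errors as "not ready yet"
    			}
    			for _, cond := range pod.Status.Conditions {
    				if cond.Type == corev1.PodReady {
    					return cond.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

Under those assumptions, a check equivalent to the wait reported above would be WaitForPodReady(ctx, clientset, "kube-system", "etcd-multinode-510563", 6*time.Minute).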
	I1212 23:31:22.273757  160181 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:22.273807  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-510563
	I1212 23:31:22.273815  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:22.273823  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:22.273829  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:22.275891  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:22.275911  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:22.275922  160181 round_trippers.go:580]     Audit-Id: 7b7d416f-dfde-43ba-9325-f63f5c295c24
	I1212 23:31:22.275931  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:22.275944  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:22.275951  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:22.275960  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:22.275965  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:22 GMT
	I1212 23:31:22.276146  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-510563","namespace":"kube-system","uid":"e8a8ed00-d13d-44f0-b7d6-b42bf1342d95","resourceVersion":"900","creationTimestamp":"2023-12-12T23:20:34Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.38:8443","kubernetes.io/config.hash":"4b970951c1b4ca2bc525afa7c2eb2fef","kubernetes.io/config.mirror":"4b970951c1b4ca2bc525afa7c2eb2fef","kubernetes.io/config.seen":"2023-12-12T23:20:27.932579600Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1212 23:31:22.276527  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:22.276542  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:22.276549  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:22.276555  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:22.278526  160181 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:31:22.278547  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:22.278557  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:22.278574  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:22.278589  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:22.278598  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:22 GMT
	I1212 23:31:22.278609  160181 round_trippers.go:580]     Audit-Id: 1909ba5b-7fc2-4925-904d-0934cd79d647
	I1212 23:31:22.278618  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:22.278847  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:22.279220  160181 pod_ready.go:92] pod "kube-apiserver-multinode-510563" in "kube-system" namespace has status "Ready":"True"
	I1212 23:31:22.279242  160181 pod_ready.go:81] duration metric: took 5.476754ms waiting for pod "kube-apiserver-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:22.279255  160181 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:22.279307  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-510563
	I1212 23:31:22.279315  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:22.279326  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:22.279334  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:22.282388  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:22.282404  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:22.282410  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:22 GMT
	I1212 23:31:22.282416  160181 round_trippers.go:580]     Audit-Id: 9ee8ee0a-e313-4c00-813b-20e8db761c58
	I1212 23:31:22.282420  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:22.282428  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:22.282432  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:22.282437  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:22.282615  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-510563","namespace":"kube-system","uid":"efdc7f68-25d6-4f6a-ab8f-1dec43407375","resourceVersion":"887","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"50f588add554ab298cca0792048dbecc","kubernetes.io/config.mirror":"50f588add554ab298cca0792048dbecc","kubernetes.io/config.seen":"2023-12-12T23:20:36.354954910Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1212 23:31:22.283136  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:22.283154  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:22.283164  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:22.283173  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:22.285941  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:22.285958  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:22.285964  160181 round_trippers.go:580]     Audit-Id: 31589ec2-5775-4be6-bcdc-1ce0d616065b
	I1212 23:31:22.285971  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:22.285979  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:22.285989  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:22.285997  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:22.286006  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:22 GMT
	I1212 23:31:22.286790  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:22.287058  160181 pod_ready.go:92] pod "kube-controller-manager-multinode-510563" in "kube-system" namespace has status "Ready":"True"
	I1212 23:31:22.287074  160181 pod_ready.go:81] duration metric: took 7.808362ms waiting for pod "kube-controller-manager-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:22.287082  160181 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fbk65" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:22.287154  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fbk65
	I1212 23:31:22.287164  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:22.287170  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:22.287176  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:22.289486  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:22.289513  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:22.289521  160181 round_trippers.go:580]     Audit-Id: 8dc5ab3a-6a6e-4a57-ad47-94289db0bfd9
	I1212 23:31:22.289527  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:22.289532  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:22.289536  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:22.289541  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:22.289546  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:22 GMT
	I1212 23:31:22.289858  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fbk65","generateName":"kube-proxy-","namespace":"kube-system","uid":"478c2dce-ac51-47ac-9d34-20dc7c331056","resourceVersion":"742","creationTimestamp":"2023-12-12T23:22:27Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d1b4c800-a24b-499d-bbe3-4a554353bc2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:22:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1b4c800-a24b-499d-bbe3-4a554353bc2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1212 23:31:22.461613  160181 request.go:629] Waited for 171.417022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m03
	I1212 23:31:22.461706  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m03
	I1212 23:31:22.461718  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:22.461726  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:22.461731  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:22.464453  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:22.464480  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:22.464491  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:22.464498  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:22.464503  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:22.464509  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:22.464521  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:22 GMT
	I1212 23:31:22.464526  160181 round_trippers.go:580]     Audit-Id: 9118a86f-7023-49a4-9f65-96b07032725e
	I1212 23:31:22.464691  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m03","uid":"6de0e5a4-53e7-4397-9be8-0053fa116498","resourceVersion":"776","creationTimestamp":"2023-12-12T23:23:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_23_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:23:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I1212 23:31:22.464977  160181 pod_ready.go:92] pod "kube-proxy-fbk65" in "kube-system" namespace has status "Ready":"True"
	I1212 23:31:22.464996  160181 pod_ready.go:81] duration metric: took 177.904973ms waiting for pod "kube-proxy-fbk65" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:22.465009  160181 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hspw8" in "kube-system" namespace to be "Ready" ...
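For illustration, the "Waited for ... due to client-side throttling, not priority and fairness" lines in this log are emitted by client-go's client-side rate limiter rather than by the API server. A minimal sketch, assuming a hypothetical kubeconfig path and rate-limit values (not minikube's configuration), of how that limiter is set on a rest.Config:

    // Illustrative sketch only; the QPS/Burst values below are hypothetical.
    // client-go throttles requests client-side using rest.Config.QPS (default 5)
    // and rest.Config.Burst (default 10); when a request is delayed by that limiter
    // it logs "Waited for ... due to client-side throttling, not priority and fairness".
    package client

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // NewClientset builds a clientset with a larger client-side rate-limit budget.
    func NewClientset(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50
    	cfg.Burst = 100
    	return kubernetes.NewForConfig(cfg)
    }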
	I1212 23:31:22.661513  160181 request.go:629] Waited for 196.40735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hspw8
	I1212 23:31:22.661567  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hspw8
	I1212 23:31:22.661572  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:22.661588  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:22.661594  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:22.664528  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:22.664556  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:22.664567  160181 round_trippers.go:580]     Audit-Id: 87d377f2-d54d-40ad-9ac9-8752f3f16baf
	I1212 23:31:22.664575  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:22.664582  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:22.664589  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:22.664597  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:22.664604  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:22 GMT
	I1212 23:31:22.664931  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hspw8","generateName":"kube-proxy-","namespace":"kube-system","uid":"a2255be6-8705-40cd-8f35-a3e82906190c","resourceVersion":"855","creationTimestamp":"2023-12-12T23:20:49Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d1b4c800-a24b-499d-bbe3-4a554353bc2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1b4c800-a24b-499d-bbe3-4a554353bc2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1212 23:31:22.861620  160181 request.go:629] Waited for 196.288584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:22.861679  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:22.861683  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:22.861692  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:22.861698  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:22.864953  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:22.864974  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:22.864982  160181 round_trippers.go:580]     Audit-Id: 42608ae9-07f4-4e99-a0f8-739212118524
	I1212 23:31:22.864990  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:22.864998  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:22.865006  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:22.865013  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:22.865020  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:22 GMT
	I1212 23:31:22.865148  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:22.865480  160181 pod_ready.go:92] pod "kube-proxy-hspw8" in "kube-system" namespace has status "Ready":"True"
	I1212 23:31:22.865494  160181 pod_ready.go:81] duration metric: took 400.479793ms waiting for pod "kube-proxy-hspw8" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:22.865504  160181 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-msx8s" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:23.061831  160181 request.go:629] Waited for 196.273134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msx8s
	I1212 23:31:23.061909  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msx8s
	I1212 23:31:23.061914  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:23.061921  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:23.061927  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:23.064893  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:23.064913  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:23.064920  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:23.064926  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:23.064931  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:23 GMT
	I1212 23:31:23.064936  160181 round_trippers.go:580]     Audit-Id: 2f1de886-a9ef-4669-8759-ae6234668c62
	I1212 23:31:23.064941  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:23.064953  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:23.065349  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-msx8s","generateName":"kube-proxy-","namespace":"kube-system","uid":"f41b9a6d-8132-45a6-9847-5a762664b008","resourceVersion":"525","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d1b4c800-a24b-499d-bbe3-4a554353bc2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1b4c800-a24b-499d-bbe3-4a554353bc2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1212 23:31:23.261010  160181 request.go:629] Waited for 195.260979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:31:23.261119  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:31:23.261131  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:23.261143  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:23.261153  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:23.263902  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:23.263924  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:23.263931  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:23 GMT
	I1212 23:31:23.263936  160181 round_trippers.go:580]     Audit-Id: 301616fd-a755-4973-a3a8-9a994152e127
	I1212 23:31:23.263943  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:23.263952  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:23.263960  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:23.263969  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:23.264548  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"d2556948-0b22-4680-ae18-714b42dd72a0","resourceVersion":"772","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_23_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 4236 chars]
	I1212 23:31:23.264861  160181 pod_ready.go:92] pod "kube-proxy-msx8s" in "kube-system" namespace has status "Ready":"True"
	I1212 23:31:23.264878  160181 pod_ready.go:81] duration metric: took 399.369142ms waiting for pod "kube-proxy-msx8s" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:23.264887  160181 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:23.461313  160181 request.go:629] Waited for 196.365954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-510563
	I1212 23:31:23.461395  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-510563
	I1212 23:31:23.461403  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:23.461412  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:23.461418  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:23.464147  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:23.464165  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:23.464171  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:23.464177  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:23.464182  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:23.464187  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:23.464192  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:23 GMT
	I1212 23:31:23.464197  160181 round_trippers.go:580]     Audit-Id: 1688b2d5-4599-4e27-8429-98fb94ebc5ee
	I1212 23:31:23.464649  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-510563","namespace":"kube-system","uid":"044da73c-9466-4a43-b283-5f4b9cc04df9","resourceVersion":"895","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fdb335c77d5fb1581ea23fa0adf419e9","kubernetes.io/config.mirror":"fdb335c77d5fb1581ea23fa0adf419e9","kubernetes.io/config.seen":"2023-12-12T23:20:36.354955844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1212 23:31:23.661386  160181 request.go:629] Waited for 196.368119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:23.661471  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:31:23.661478  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:23.661488  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:23.661498  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:23.664865  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:23.664888  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:23.664897  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:23.664904  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:23 GMT
	I1212 23:31:23.664911  160181 round_trippers.go:580]     Audit-Id: 90c287e1-db5f-4e8a-9059-d869eb748354
	I1212 23:31:23.664918  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:23.664926  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:23.664934  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:23.665118  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1212 23:31:23.665469  160181 pod_ready.go:92] pod "kube-scheduler-multinode-510563" in "kube-system" namespace has status "Ready":"True"
	I1212 23:31:23.665486  160181 pod_ready.go:81] duration metric: took 400.591693ms waiting for pod "kube-scheduler-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:31:23.665498  160181 pod_ready.go:38] duration metric: took 10.395610172s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:31:23.665517  160181 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:31:23.665576  160181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:31:23.679373  160181 command_runner.go:130] > 1066
	I1212 23:31:23.679410  160181 api_server.go:72] duration metric: took 11.786507929s to wait for apiserver process to appear ...
	I1212 23:31:23.679419  160181 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:31:23.679438  160181 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I1212 23:31:23.684188  160181 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I1212 23:31:23.684238  160181 round_trippers.go:463] GET https://192.168.39.38:8443/version
	I1212 23:31:23.684257  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:23.684265  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:23.684274  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:23.685483  160181 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:31:23.685498  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:23.685505  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:23.685510  160181 round_trippers.go:580]     Content-Length: 264
	I1212 23:31:23.685515  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:23 GMT
	I1212 23:31:23.685520  160181 round_trippers.go:580]     Audit-Id: c89632a4-7618-4ada-bec1-45e1c5a26bb1
	I1212 23:31:23.685525  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:23.685530  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:23.685536  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:23.685551  160181 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 23:31:23.685603  160181 api_server.go:141] control plane version: v1.28.4
	I1212 23:31:23.685620  160181 api_server.go:131] duration metric: took 6.194635ms to wait for apiserver health ...
	I1212 23:31:23.685627  160181 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:31:23.860985  160181 request.go:629] Waited for 175.273282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I1212 23:31:23.861047  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I1212 23:31:23.861052  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:23.861060  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:23.861066  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:23.865882  160181 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:31:23.865908  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:23.865917  160181 round_trippers.go:580]     Audit-Id: 2b0b3be3-0384-42dc-a672-df14b08ea3f8
	I1212 23:31:23.865926  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:23.865935  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:23.865944  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:23.865953  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:23.865960  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:23 GMT
	I1212 23:31:23.867775  160181 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"917"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"894","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81846 chars]
	I1212 23:31:23.870190  160181 system_pods.go:59] 12 kube-system pods found
	I1212 23:31:23.870210  160181 system_pods.go:61] "coredns-5dd5756b68-zcxks" [503de693-19d6-45c5-97c6-3b8e5657bfee] Running
	I1212 23:31:23.870215  160181 system_pods.go:61] "etcd-multinode-510563" [2748a67b-24f2-4b90-bf95-eb56755a397a] Running
	I1212 23:31:23.870224  160181 system_pods.go:61] "kindnet-5v7sf" [ed1b67f7-1607-4266-9a99-dd7e084a0abc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 23:31:23.870229  160181 system_pods.go:61] "kindnet-lqdxw" [56d8e0e6-679d-47bd-af1f-c1b8d8018eb5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 23:31:23.870233  160181 system_pods.go:61] "kindnet-v4js8" [cfe24f85-472c-4ef2-9a48-9e3647cc8feb] Running
	I1212 23:31:23.870238  160181 system_pods.go:61] "kube-apiserver-multinode-510563" [e8a8ed00-d13d-44f0-b7d6-b42bf1342d95] Running
	I1212 23:31:23.870242  160181 system_pods.go:61] "kube-controller-manager-multinode-510563" [efdc7f68-25d6-4f6a-ab8f-1dec43407375] Running
	I1212 23:31:23.870245  160181 system_pods.go:61] "kube-proxy-fbk65" [478c2dce-ac51-47ac-9d34-20dc7c331056] Running
	I1212 23:31:23.870249  160181 system_pods.go:61] "kube-proxy-hspw8" [a2255be6-8705-40cd-8f35-a3e82906190c] Running
	I1212 23:31:23.870256  160181 system_pods.go:61] "kube-proxy-msx8s" [f41b9a6d-8132-45a6-9847-5a762664b008] Running
	I1212 23:31:23.870262  160181 system_pods.go:61] "kube-scheduler-multinode-510563" [044da73c-9466-4a43-b283-5f4b9cc04df9] Running
	I1212 23:31:23.870268  160181 system_pods.go:61] "storage-provisioner" [cb4f186a-9bb9-488f-8a74-6e01f352fc05] Running
	I1212 23:31:23.870274  160181 system_pods.go:74] duration metric: took 184.639527ms to wait for pod list to return data ...
	I1212 23:31:23.870280  160181 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:31:24.061803  160181 request.go:629] Waited for 191.400405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/default/serviceaccounts
	I1212 23:31:24.061878  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/default/serviceaccounts
	I1212 23:31:24.061884  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:24.061891  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:24.061897  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:24.064929  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:31:24.064946  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:24.064952  160181 round_trippers.go:580]     Audit-Id: d4c3eb69-9d37-485c-b8fe-30ce602158dc
	I1212 23:31:24.064957  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:24.064963  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:24.064968  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:24.064973  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:24.064977  160181 round_trippers.go:580]     Content-Length: 261
	I1212 23:31:24.064982  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:24 GMT
	I1212 23:31:24.064997  160181 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"917"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"477a27c6-8724-40b2-af7e-afc80b75b08c","resourceVersion":"350","creationTimestamp":"2023-12-12T23:20:48Z"}}]}
	I1212 23:31:24.065185  160181 default_sa.go:45] found service account: "default"
	I1212 23:31:24.065202  160181 default_sa.go:55] duration metric: took 194.918002ms for default service account to be created ...
	I1212 23:31:24.065213  160181 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:31:24.261697  160181 request.go:629] Waited for 196.416105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I1212 23:31:24.261782  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I1212 23:31:24.261790  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:24.261797  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:24.261807  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:24.266762  160181 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:31:24.266788  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:24.266798  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:24.266804  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:24.266813  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:24.266819  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:24.266826  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:24 GMT
	I1212 23:31:24.266843  160181 round_trippers.go:580]     Audit-Id: f4a34198-8d28-4dc1-b1a1-884716c2c2cd
	I1212 23:31:24.267728  160181 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"917"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"894","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81846 chars]
	I1212 23:31:24.270487  160181 system_pods.go:86] 12 kube-system pods found
	I1212 23:31:24.270525  160181 system_pods.go:89] "coredns-5dd5756b68-zcxks" [503de693-19d6-45c5-97c6-3b8e5657bfee] Running
	I1212 23:31:24.270534  160181 system_pods.go:89] "etcd-multinode-510563" [2748a67b-24f2-4b90-bf95-eb56755a397a] Running
	I1212 23:31:24.270545  160181 system_pods.go:89] "kindnet-5v7sf" [ed1b67f7-1607-4266-9a99-dd7e084a0abc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 23:31:24.270556  160181 system_pods.go:89] "kindnet-lqdxw" [56d8e0e6-679d-47bd-af1f-c1b8d8018eb5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 23:31:24.270570  160181 system_pods.go:89] "kindnet-v4js8" [cfe24f85-472c-4ef2-9a48-9e3647cc8feb] Running
	I1212 23:31:24.270582  160181 system_pods.go:89] "kube-apiserver-multinode-510563" [e8a8ed00-d13d-44f0-b7d6-b42bf1342d95] Running
	I1212 23:31:24.270590  160181 system_pods.go:89] "kube-controller-manager-multinode-510563" [efdc7f68-25d6-4f6a-ab8f-1dec43407375] Running
	I1212 23:31:24.270600  160181 system_pods.go:89] "kube-proxy-fbk65" [478c2dce-ac51-47ac-9d34-20dc7c331056] Running
	I1212 23:31:24.270607  160181 system_pods.go:89] "kube-proxy-hspw8" [a2255be6-8705-40cd-8f35-a3e82906190c] Running
	I1212 23:31:24.270614  160181 system_pods.go:89] "kube-proxy-msx8s" [f41b9a6d-8132-45a6-9847-5a762664b008] Running
	I1212 23:31:24.270621  160181 system_pods.go:89] "kube-scheduler-multinode-510563" [044da73c-9466-4a43-b283-5f4b9cc04df9] Running
	I1212 23:31:24.270631  160181 system_pods.go:89] "storage-provisioner" [cb4f186a-9bb9-488f-8a74-6e01f352fc05] Running
	I1212 23:31:24.270641  160181 system_pods.go:126] duration metric: took 205.420763ms to wait for k8s-apps to be running ...
	I1212 23:31:24.270656  160181 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:31:24.270719  160181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:31:24.285575  160181 system_svc.go:56] duration metric: took 14.913654ms WaitForService to wait for kubelet.
	I1212 23:31:24.285600  160181 kubeadm.go:581] duration metric: took 12.392700588s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:31:24.285616  160181 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:31:24.460983  160181 request.go:629] Waited for 175.280786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes
	I1212 23:31:24.461058  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes
	I1212 23:31:24.461065  160181 round_trippers.go:469] Request Headers:
	I1212 23:31:24.461076  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:31:24.461087  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:31:24.463779  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:31:24.463797  160181 round_trippers.go:577] Response Headers:
	I1212 23:31:24.463804  160181 round_trippers.go:580]     Audit-Id: ac5b8202-0e0e-42f8-afcd-375346efadf1
	I1212 23:31:24.463810  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:31:24.463832  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:31:24.463841  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:31:24.463850  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:31:24.463861  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:31:24 GMT
	I1212 23:31:24.464273  160181 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"917"},"items":[{"metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"888","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16179 chars]
	I1212 23:31:24.464881  160181 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:31:24.464902  160181 node_conditions.go:123] node cpu capacity is 2
	I1212 23:31:24.464912  160181 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:31:24.464916  160181 node_conditions.go:123] node cpu capacity is 2
	I1212 23:31:24.464920  160181 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:31:24.464927  160181 node_conditions.go:123] node cpu capacity is 2
	I1212 23:31:24.464957  160181 node_conditions.go:105] duration metric: took 179.334042ms to run NodePressure ...
	I1212 23:31:24.464970  160181 start.go:228] waiting for startup goroutines ...
	I1212 23:31:24.464978  160181 start.go:233] waiting for cluster config update ...
	I1212 23:31:24.464986  160181 start.go:242] writing updated cluster config ...
	I1212 23:31:24.465394  160181 config.go:182] Loaded profile config "multinode-510563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:31:24.465484  160181 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/config.json ...
	I1212 23:31:24.468663  160181 out.go:177] * Starting worker node multinode-510563-m02 in cluster multinode-510563
	I1212 23:31:24.469889  160181 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:31:24.469911  160181 cache.go:56] Caching tarball of preloaded images
	I1212 23:31:24.470027  160181 preload.go:174] Found /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 23:31:24.470043  160181 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 23:31:24.470173  160181 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/config.json ...
	I1212 23:31:24.470338  160181 start.go:365] acquiring machines lock for multinode-510563-m02: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:31:24.470382  160181 start.go:369] acquired machines lock for "multinode-510563-m02" in 26.04µs
	I1212 23:31:24.470395  160181 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:31:24.470402  160181 fix.go:54] fixHost starting: m02
	I1212 23:31:24.470661  160181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:31:24.470681  160181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:31:24.485549  160181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
	I1212 23:31:24.486048  160181 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:31:24.486509  160181 main.go:141] libmachine: Using API Version  1
	I1212 23:31:24.486528  160181 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:31:24.486835  160181 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:31:24.487010  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .DriverName
	I1212 23:31:24.487175  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetState
	I1212 23:31:24.488650  160181 fix.go:102] recreateIfNeeded on multinode-510563-m02: state=Running err=<nil>
	W1212 23:31:24.488666  160181 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:31:24.490566  160181 out.go:177] * Updating the running kvm2 "multinode-510563-m02" VM ...
	I1212 23:31:24.491965  160181 machine.go:88] provisioning docker machine ...
	I1212 23:31:24.491985  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .DriverName
	I1212 23:31:24.492183  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetMachineName
	I1212 23:31:24.492343  160181 buildroot.go:166] provisioning hostname "multinode-510563-m02"
	I1212 23:31:24.492359  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetMachineName
	I1212 23:31:24.492500  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHHostname
	I1212 23:31:24.494795  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:31:24.495233  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:31:24.495254  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:31:24.495425  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHPort
	I1212 23:31:24.495603  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:31:24.495757  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:31:24.495880  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHUsername
	I1212 23:31:24.496013  160181 main.go:141] libmachine: Using SSH client type: native
	I1212 23:31:24.496319  160181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1212 23:31:24.496332  160181 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-510563-m02 && echo "multinode-510563-m02" | sudo tee /etc/hostname
	I1212 23:31:24.627064  160181 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-510563-m02
	
	I1212 23:31:24.627089  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHHostname
	I1212 23:31:24.630171  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:31:24.630546  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:31:24.630575  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:31:24.630743  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHPort
	I1212 23:31:24.630924  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:31:24.631065  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:31:24.631186  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHUsername
	I1212 23:31:24.631367  160181 main.go:141] libmachine: Using SSH client type: native
	I1212 23:31:24.631684  160181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1212 23:31:24.631706  160181 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-510563-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-510563-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-510563-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:31:24.745141  160181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:31:24.745170  160181 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1212 23:31:24.745195  160181 buildroot.go:174] setting up certificates
	I1212 23:31:24.745204  160181 provision.go:83] configureAuth start
	I1212 23:31:24.745212  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetMachineName
	I1212 23:31:24.745525  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetIP
	I1212 23:31:24.748159  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:31:24.748669  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:31:24.748687  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:31:24.748874  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHHostname
	I1212 23:31:24.751236  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:31:24.751602  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:31:24.751630  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:31:24.751773  160181 provision.go:138] copyHostCerts
	I1212 23:31:24.751803  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1212 23:31:24.751847  160181 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1212 23:31:24.751878  160181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1212 23:31:24.751962  160181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1212 23:31:24.752062  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1212 23:31:24.752088  160181 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1212 23:31:24.752098  160181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1212 23:31:24.752141  160181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1212 23:31:24.752204  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1212 23:31:24.752227  160181 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1212 23:31:24.752237  160181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1212 23:31:24.752276  160181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1212 23:31:24.752337  160181 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.multinode-510563-m02 san=[192.168.39.109 192.168.39.109 localhost 127.0.0.1 minikube multinode-510563-m02]
	I1212 23:31:24.965929  160181 provision.go:172] copyRemoteCerts
	I1212 23:31:24.966010  160181 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:31:24.966039  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHHostname
	I1212 23:31:24.968860  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:31:24.969239  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:31:24.969270  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:31:24.969459  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHPort
	I1212 23:31:24.969662  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:31:24.969824  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHUsername
	I1212 23:31:24.970000  160181 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m02/id_rsa Username:docker}
	I1212 23:31:25.053391  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 23:31:25.053473  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:31:25.076482  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 23:31:25.076551  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 23:31:25.099352  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 23:31:25.099415  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 23:31:25.122213  160181 provision.go:86] duration metric: configureAuth took 376.998911ms
	I1212 23:31:25.122237  160181 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:31:25.122470  160181 config.go:182] Loaded profile config "multinode-510563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:31:25.122554  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHHostname
	I1212 23:31:25.125100  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:31:25.125482  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:31:25.125512  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:31:25.125645  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHPort
	I1212 23:31:25.125836  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:31:25.126028  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:31:25.126183  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHUsername
	I1212 23:31:25.126391  160181 main.go:141] libmachine: Using SSH client type: native
	I1212 23:31:25.126844  160181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1212 23:31:25.126862  160181 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:32:55.709952  160181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:32:55.709994  160181 machine.go:91] provisioned docker machine in 1m31.218009816s
	I1212 23:32:55.710007  160181 start.go:300] post-start starting for "multinode-510563-m02" (driver="kvm2")
	I1212 23:32:55.710021  160181 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:32:55.710047  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .DriverName
	I1212 23:32:55.710401  160181 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:32:55.710438  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHHostname
	I1212 23:32:55.713182  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:32:55.713603  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:32:55.713625  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:32:55.713800  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHPort
	I1212 23:32:55.713996  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:32:55.714136  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHUsername
	I1212 23:32:55.714346  160181 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m02/id_rsa Username:docker}
	I1212 23:32:55.803590  160181 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:32:55.808083  160181 command_runner.go:130] > NAME=Buildroot
	I1212 23:32:55.808106  160181 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
	I1212 23:32:55.808111  160181 command_runner.go:130] > ID=buildroot
	I1212 23:32:55.808116  160181 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 23:32:55.808134  160181 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 23:32:55.808173  160181 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:32:55.808191  160181 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1212 23:32:55.808265  160181 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1212 23:32:55.808368  160181 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1212 23:32:55.808391  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> /etc/ssl/certs/1435412.pem
	I1212 23:32:55.808541  160181 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:32:55.817864  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1212 23:32:55.841028  160181 start.go:303] post-start completed in 131.003697ms
	I1212 23:32:55.841058  160181 fix.go:56] fixHost completed within 1m31.370654007s
	I1212 23:32:55.841110  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHHostname
	I1212 23:32:55.844074  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:32:55.844480  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:32:55.844516  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:32:55.844631  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHPort
	I1212 23:32:55.844820  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:32:55.844960  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:32:55.845094  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHUsername
	I1212 23:32:55.845242  160181 main.go:141] libmachine: Using SSH client type: native
	I1212 23:32:55.845570  160181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1212 23:32:55.845581  160181 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:32:55.961197  160181 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423975.948824753
	
	I1212 23:32:55.961220  160181 fix.go:206] guest clock: 1702423975.948824753
	I1212 23:32:55.961229  160181 fix.go:219] Guest: 2023-12-12 23:32:55.948824753 +0000 UTC Remote: 2023-12-12 23:32:55.841062434 +0000 UTC m=+451.043792953 (delta=107.762319ms)
	I1212 23:32:55.961248  160181 fix.go:190] guest clock delta is within tolerance: 107.762319ms
	I1212 23:32:55.961254  160181 start.go:83] releasing machines lock for "multinode-510563-m02", held for 1m31.490863552s
	I1212 23:32:55.961281  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .DriverName
	I1212 23:32:55.961580  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetIP
	I1212 23:32:55.964029  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:32:55.964382  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:32:55.964406  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:32:55.966791  160181 out.go:177] * Found network options:
	I1212 23:32:55.968426  160181 out.go:177]   - NO_PROXY=192.168.39.38
	W1212 23:32:55.969782  160181 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 23:32:55.969828  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .DriverName
	I1212 23:32:55.970489  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .DriverName
	I1212 23:32:55.970676  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .DriverName
	I1212 23:32:55.970743  160181 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:32:55.970792  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHHostname
	W1212 23:32:55.970897  160181 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 23:32:55.970976  160181 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:32:55.971001  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHHostname
	I1212 23:32:55.973510  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:32:55.973849  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:32:55.973882  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:32:55.973910  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:32:55.974093  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHPort
	I1212 23:32:55.974248  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:32:55.974424  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHUsername
	I1212 23:32:55.974454  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:32:55.974485  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:32:55.974603  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHPort
	I1212 23:32:55.974601  160181 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m02/id_rsa Username:docker}
	I1212 23:32:55.974744  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:32:55.974897  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHUsername
	I1212 23:32:55.975053  160181 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m02/id_rsa Username:docker}
	I1212 23:32:56.204760  160181 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 23:32:56.204832  160181 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 23:32:56.210867  160181 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 23:32:56.210949  160181 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:32:56.211022  160181 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:32:56.220548  160181 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 23:32:56.220576  160181 start.go:475] detecting cgroup driver to use...
	I1212 23:32:56.220666  160181 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:32:56.235318  160181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:32:56.248108  160181 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:32:56.248166  160181 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:32:56.262248  160181 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:32:56.275068  160181 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:32:56.438756  160181 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:32:56.577153  160181 docker.go:219] disabling docker service ...
	I1212 23:32:56.577227  160181 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:32:56.592211  160181 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:32:56.604891  160181 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:32:56.727080  160181 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:32:56.861673  160181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:32:56.873946  160181 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:32:56.892668  160181 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 23:32:56.892704  160181 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:32:56.892760  160181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:32:56.903781  160181 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:32:56.903856  160181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:32:56.913854  160181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:32:56.923099  160181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:32:56.932231  160181 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:32:56.941776  160181 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:32:56.950232  160181 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 23:32:56.950287  160181 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:32:56.958585  160181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:32:57.087674  160181 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:32:57.468172  160181 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:32:57.468261  160181 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:32:57.474011  160181 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 23:32:57.474038  160181 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 23:32:57.474048  160181 command_runner.go:130] > Device: 16h/22d	Inode: 1234        Links: 1
	I1212 23:32:57.474059  160181 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 23:32:57.474067  160181 command_runner.go:130] > Access: 2023-12-12 23:32:57.389939162 +0000
	I1212 23:32:57.474078  160181 command_runner.go:130] > Modify: 2023-12-12 23:32:57.389939162 +0000
	I1212 23:32:57.474091  160181 command_runner.go:130] > Change: 2023-12-12 23:32:57.389939162 +0000
	I1212 23:32:57.474100  160181 command_runner.go:130] >  Birth: -
	I1212 23:32:57.474121  160181 start.go:543] Will wait 60s for crictl version
	I1212 23:32:57.474164  160181 ssh_runner.go:195] Run: which crictl
	I1212 23:32:57.478409  160181 command_runner.go:130] > /usr/bin/crictl
	I1212 23:32:57.478477  160181 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:32:57.519563  160181 command_runner.go:130] > Version:  0.1.0
	I1212 23:32:57.519581  160181 command_runner.go:130] > RuntimeName:  cri-o
	I1212 23:32:57.519586  160181 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1212 23:32:57.519591  160181 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 23:32:57.519771  160181 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:32:57.519832  160181 ssh_runner.go:195] Run: crio --version
	I1212 23:32:57.567223  160181 command_runner.go:130] > crio version 1.24.1
	I1212 23:32:57.567252  160181 command_runner.go:130] > Version:          1.24.1
	I1212 23:32:57.567263  160181 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 23:32:57.567270  160181 command_runner.go:130] > GitTreeState:     dirty
	I1212 23:32:57.567278  160181 command_runner.go:130] > BuildDate:        2023-12-08T06:18:18Z
	I1212 23:32:57.567286  160181 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 23:32:57.567292  160181 command_runner.go:130] > Compiler:         gc
	I1212 23:32:57.567299  160181 command_runner.go:130] > Platform:         linux/amd64
	I1212 23:32:57.567307  160181 command_runner.go:130] > Linkmode:         dynamic
	I1212 23:32:57.567317  160181 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 23:32:57.567324  160181 command_runner.go:130] > SeccompEnabled:   true
	I1212 23:32:57.567331  160181 command_runner.go:130] > AppArmorEnabled:  false
	I1212 23:32:57.567414  160181 ssh_runner.go:195] Run: crio --version
	I1212 23:32:57.615689  160181 command_runner.go:130] > crio version 1.24.1
	I1212 23:32:57.615712  160181 command_runner.go:130] > Version:          1.24.1
	I1212 23:32:57.615719  160181 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 23:32:57.615723  160181 command_runner.go:130] > GitTreeState:     dirty
	I1212 23:32:57.615729  160181 command_runner.go:130] > BuildDate:        2023-12-08T06:18:18Z
	I1212 23:32:57.615733  160181 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 23:32:57.615739  160181 command_runner.go:130] > Compiler:         gc
	I1212 23:32:57.615747  160181 command_runner.go:130] > Platform:         linux/amd64
	I1212 23:32:57.615755  160181 command_runner.go:130] > Linkmode:         dynamic
	I1212 23:32:57.615771  160181 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 23:32:57.615782  160181 command_runner.go:130] > SeccompEnabled:   true
	I1212 23:32:57.615789  160181 command_runner.go:130] > AppArmorEnabled:  false
	I1212 23:32:57.617824  160181 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 23:32:57.619312  160181 out.go:177]   - env NO_PROXY=192.168.39.38
	I1212 23:32:57.620892  160181 main.go:141] libmachine: (multinode-510563-m02) Calling .GetIP
	I1212 23:32:57.623413  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:32:57.623761  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:32:57.623794  160181 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:32:57.623995  160181 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 23:32:57.628288  160181 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1212 23:32:57.628320  160181 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563 for IP: 192.168.39.109
	I1212 23:32:57.628336  160181 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:32:57.628524  160181 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1212 23:32:57.628572  160181 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1212 23:32:57.628588  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 23:32:57.628615  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 23:32:57.628634  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 23:32:57.628653  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 23:32:57.628723  160181 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1212 23:32:57.628758  160181 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1212 23:32:57.628774  160181 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:32:57.628809  160181 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:32:57.628841  160181 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:32:57.628875  160181 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1212 23:32:57.628932  160181 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1212 23:32:57.628962  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:32:57.628981  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem -> /usr/share/ca-certificates/143541.pem
	I1212 23:32:57.628999  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> /usr/share/ca-certificates/1435412.pem
	I1212 23:32:57.629349  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:32:57.653699  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 23:32:57.675831  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:32:57.697694  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 23:32:57.720678  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:32:57.743373  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1212 23:32:57.765827  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1212 23:32:57.788849  160181 ssh_runner.go:195] Run: openssl version
	I1212 23:32:57.794855  160181 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 23:32:57.794918  160181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1212 23:32:57.805438  160181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1212 23:32:57.810037  160181 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1212 23:32:57.810296  160181 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1212 23:32:57.810353  160181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1212 23:32:57.815536  160181 command_runner.go:130] > 3ec20f2e
	I1212 23:32:57.815777  160181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:32:57.823777  160181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:32:57.832885  160181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:32:57.837222  160181 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:32:57.837310  160181 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:32:57.837355  160181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:32:57.842896  160181 command_runner.go:130] > b5213941
	I1212 23:32:57.842956  160181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:32:57.851150  160181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1212 23:32:57.860720  160181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1212 23:32:57.865094  160181 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1212 23:32:57.865154  160181 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1212 23:32:57.865188  160181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1212 23:32:57.870648  160181 command_runner.go:130] > 51391683
	I1212 23:32:57.870952  160181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1212 23:32:57.878929  160181 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:32:57.882972  160181 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:32:57.883188  160181 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:32:57.883277  160181 ssh_runner.go:195] Run: crio config
	I1212 23:32:57.933554  160181 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 23:32:57.933580  160181 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 23:32:57.933591  160181 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 23:32:57.933597  160181 command_runner.go:130] > #
	I1212 23:32:57.933609  160181 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 23:32:57.933621  160181 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 23:32:57.933634  160181 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 23:32:57.933647  160181 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 23:32:57.933665  160181 command_runner.go:130] > # reload'.
	I1212 23:32:57.933675  160181 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 23:32:57.933691  160181 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 23:32:57.933705  160181 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 23:32:57.933720  160181 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 23:32:57.933731  160181 command_runner.go:130] > [crio]
	I1212 23:32:57.933743  160181 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 23:32:57.933749  160181 command_runner.go:130] > # containers images, in this directory.
	I1212 23:32:57.933781  160181 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1212 23:32:57.933804  160181 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 23:32:57.933996  160181 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1212 23:32:57.934019  160181 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 23:32:57.934030  160181 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 23:32:57.934197  160181 command_runner.go:130] > storage_driver = "overlay"
	I1212 23:32:57.934217  160181 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 23:32:57.934228  160181 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 23:32:57.934237  160181 command_runner.go:130] > storage_option = [
	I1212 23:32:57.934678  160181 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1212 23:32:57.934718  160181 command_runner.go:130] > ]
	I1212 23:32:57.934735  160181 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 23:32:57.934744  160181 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 23:32:57.935429  160181 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 23:32:57.935448  160181 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 23:32:57.935458  160181 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 23:32:57.935464  160181 command_runner.go:130] > # always happen on a node reboot
	I1212 23:32:57.935469  160181 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 23:32:57.935483  160181 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 23:32:57.935497  160181 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 23:32:57.935513  160181 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 23:32:57.935526  160181 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1212 23:32:57.935537  160181 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 23:32:57.935546  160181 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 23:32:57.935554  160181 command_runner.go:130] > # internal_wipe = true
	I1212 23:32:57.935564  160181 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 23:32:57.935579  160181 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 23:32:57.935592  160181 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 23:32:57.935664  160181 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 23:32:57.935696  160181 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 23:32:57.935704  160181 command_runner.go:130] > [crio.api]
	I1212 23:32:57.935716  160181 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 23:32:57.935728  160181 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 23:32:57.935740  160181 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 23:32:57.935753  160181 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 23:32:57.935768  160181 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 23:32:57.935780  160181 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 23:32:57.935791  160181 command_runner.go:130] > # stream_port = "0"
	I1212 23:32:57.935803  160181 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 23:32:57.935813  160181 command_runner.go:130] > # stream_enable_tls = false
	I1212 23:32:57.935824  160181 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 23:32:57.935834  160181 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 23:32:57.935849  160181 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 23:32:57.935871  160181 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1212 23:32:57.935881  160181 command_runner.go:130] > # minutes.
	I1212 23:32:57.935892  160181 command_runner.go:130] > # stream_tls_cert = ""
	I1212 23:32:57.935903  160181 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 23:32:57.935916  160181 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1212 23:32:57.935926  160181 command_runner.go:130] > # stream_tls_key = ""
	I1212 23:32:57.935942  160181 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 23:32:57.935956  160181 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 23:32:57.935968  160181 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1212 23:32:57.935978  160181 command_runner.go:130] > # stream_tls_ca = ""
	I1212 23:32:57.935991  160181 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 23:32:57.936002  160181 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1212 23:32:57.936016  160181 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 23:32:57.936026  160181 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1212 23:32:57.936076  160181 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 23:32:57.936089  160181 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 23:32:57.936096  160181 command_runner.go:130] > [crio.runtime]
	I1212 23:32:57.936110  160181 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 23:32:57.936122  160181 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 23:32:57.936132  160181 command_runner.go:130] > # "nofile=1024:2048"
	I1212 23:32:57.936142  160181 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 23:32:57.936152  160181 command_runner.go:130] > # default_ulimits = [
	I1212 23:32:57.936160  160181 command_runner.go:130] > # ]
	I1212 23:32:57.936166  160181 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 23:32:57.936176  160181 command_runner.go:130] > # no_pivot = false
	I1212 23:32:57.936187  160181 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 23:32:57.936201  160181 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 23:32:57.936212  160181 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 23:32:57.936224  160181 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 23:32:57.936235  160181 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 23:32:57.936247  160181 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 23:32:57.936255  160181 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1212 23:32:57.936263  160181 command_runner.go:130] > # Cgroup setting for conmon
	I1212 23:32:57.936278  160181 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 23:32:57.936288  160181 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 23:32:57.936303  160181 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 23:32:57.936314  160181 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 23:32:57.936328  160181 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 23:32:57.936335  160181 command_runner.go:130] > conmon_env = [
	I1212 23:32:57.936343  160181 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1212 23:32:57.936351  160181 command_runner.go:130] > ]
	I1212 23:32:57.936361  160181 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 23:32:57.936374  160181 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 23:32:57.936386  160181 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 23:32:57.936397  160181 command_runner.go:130] > # default_env = [
	I1212 23:32:57.936403  160181 command_runner.go:130] > # ]
	I1212 23:32:57.936416  160181 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 23:32:57.936425  160181 command_runner.go:130] > # selinux = false
	I1212 23:32:57.936448  160181 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 23:32:57.936465  160181 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1212 23:32:57.936477  160181 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1212 23:32:57.936487  160181 command_runner.go:130] > # seccomp_profile = ""
	I1212 23:32:57.936499  160181 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1212 23:32:57.936516  160181 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1212 23:32:57.936526  160181 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1212 23:32:57.936534  160181 command_runner.go:130] > # which might increase security.
	I1212 23:32:57.936546  160181 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1212 23:32:57.936560  160181 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 23:32:57.936573  160181 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 23:32:57.936586  160181 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 23:32:57.936600  160181 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 23:32:57.936608  160181 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:32:57.936641  160181 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 23:32:57.936658  160181 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 23:32:57.936669  160181 command_runner.go:130] > # the cgroup blockio controller.
	I1212 23:32:57.936680  160181 command_runner.go:130] > # blockio_config_file = ""
	I1212 23:32:57.936689  160181 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 23:32:57.936696  160181 command_runner.go:130] > # irqbalance daemon.
	I1212 23:32:57.937302  160181 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 23:32:57.937316  160181 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 23:32:57.937321  160181 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:32:57.938038  160181 command_runner.go:130] > # rdt_config_file = ""
	I1212 23:32:57.938051  160181 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 23:32:57.938319  160181 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 23:32:57.938334  160181 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 23:32:57.938824  160181 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 23:32:57.938843  160181 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 23:32:57.938854  160181 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 23:32:57.938862  160181 command_runner.go:130] > # will be added.
	I1212 23:32:57.938988  160181 command_runner.go:130] > # default_capabilities = [
	I1212 23:32:57.939334  160181 command_runner.go:130] > # 	"CHOWN",
	I1212 23:32:57.939709  160181 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 23:32:57.939930  160181 command_runner.go:130] > # 	"FSETID",
	I1212 23:32:57.940154  160181 command_runner.go:130] > # 	"FOWNER",
	I1212 23:32:57.940653  160181 command_runner.go:130] > # 	"SETGID",
	I1212 23:32:57.940848  160181 command_runner.go:130] > # 	"SETUID",
	I1212 23:32:57.941063  160181 command_runner.go:130] > # 	"SETPCAP",
	I1212 23:32:57.942506  160181 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 23:32:57.942529  160181 command_runner.go:130] > # 	"KILL",
	I1212 23:32:57.942536  160181 command_runner.go:130] > # ]
	I1212 23:32:57.942546  160181 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 23:32:57.942554  160181 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 23:32:57.942564  160181 command_runner.go:130] > # default_sysctls = [
	I1212 23:32:57.942572  160181 command_runner.go:130] > # ]
	I1212 23:32:57.942584  160181 command_runner.go:130] > # List of devices on the host that a
	I1212 23:32:57.942593  160181 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 23:32:57.942603  160181 command_runner.go:130] > # allowed_devices = [
	I1212 23:32:57.942609  160181 command_runner.go:130] > # 	"/dev/fuse",
	I1212 23:32:57.942618  160181 command_runner.go:130] > # ]
	I1212 23:32:57.942626  160181 command_runner.go:130] > # List of additional devices. specified as
	I1212 23:32:57.942641  160181 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 23:32:57.942653  160181 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 23:32:57.942682  160181 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 23:32:57.942691  160181 command_runner.go:130] > # additional_devices = [
	I1212 23:32:57.942695  160181 command_runner.go:130] > # ]
	I1212 23:32:57.942700  160181 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 23:32:57.942704  160181 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 23:32:57.942708  160181 command_runner.go:130] > # 	"/etc/cdi",
	I1212 23:32:57.942712  160181 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 23:32:57.942718  160181 command_runner.go:130] > # ]
	I1212 23:32:57.942725  160181 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 23:32:57.942734  160181 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 23:32:57.942742  160181 command_runner.go:130] > # Defaults to false.
	I1212 23:32:57.942747  160181 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 23:32:57.942755  160181 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 23:32:57.942764  160181 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 23:32:57.942768  160181 command_runner.go:130] > # hooks_dir = [
	I1212 23:32:57.942775  160181 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 23:32:57.942779  160181 command_runner.go:130] > # ]
	I1212 23:32:57.942787  160181 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 23:32:57.942793  160181 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 23:32:57.942801  160181 command_runner.go:130] > # its default mounts from the following two files:
	I1212 23:32:57.942804  160181 command_runner.go:130] > #
	I1212 23:32:57.942810  160181 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 23:32:57.942818  160181 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 23:32:57.942824  160181 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 23:32:57.942829  160181 command_runner.go:130] > #
	I1212 23:32:57.942835  160181 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 23:32:57.942844  160181 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 23:32:57.942852  160181 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 23:32:57.942857  160181 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 23:32:57.942863  160181 command_runner.go:130] > #
	I1212 23:32:57.942867  160181 command_runner.go:130] > # default_mounts_file = ""
	I1212 23:32:57.942872  160181 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 23:32:57.942881  160181 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 23:32:57.942896  160181 command_runner.go:130] > pids_limit = 1024
	I1212 23:32:57.942904  160181 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1212 23:32:57.942910  160181 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 23:32:57.942918  160181 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 23:32:57.942927  160181 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 23:32:57.942933  160181 command_runner.go:130] > # log_size_max = -1
	I1212 23:32:57.942939  160181 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 23:32:57.942946  160181 command_runner.go:130] > # log_to_journald = false
	I1212 23:32:57.942952  160181 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 23:32:57.942957  160181 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 23:32:57.942962  160181 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 23:32:57.942970  160181 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 23:32:57.942975  160181 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 23:32:57.942982  160181 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 23:32:57.942987  160181 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 23:32:57.942993  160181 command_runner.go:130] > # read_only = false
	I1212 23:32:57.942999  160181 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 23:32:57.943008  160181 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 23:32:57.943045  160181 command_runner.go:130] > # live configuration reload.
	I1212 23:32:57.943052  160181 command_runner.go:130] > # log_level = "info"
	I1212 23:32:57.943059  160181 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 23:32:57.943067  160181 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:32:57.943074  160181 command_runner.go:130] > # log_filter = ""
	I1212 23:32:57.943086  160181 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 23:32:57.943099  160181 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 23:32:57.943108  160181 command_runner.go:130] > # separated by comma.
	I1212 23:32:57.943114  160181 command_runner.go:130] > # uid_mappings = ""
	I1212 23:32:57.943126  160181 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 23:32:57.943139  160181 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 23:32:57.943149  160181 command_runner.go:130] > # separated by comma.
	I1212 23:32:57.943156  160181 command_runner.go:130] > # gid_mappings = ""
	I1212 23:32:57.943168  160181 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 23:32:57.943179  160181 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 23:32:57.943187  160181 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 23:32:57.943192  160181 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 23:32:57.943198  160181 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 23:32:57.943206  160181 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 23:32:57.943212  160181 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 23:32:57.943219  160181 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 23:32:57.943225  160181 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 23:32:57.943233  160181 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 23:32:57.943239  160181 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 23:32:57.943245  160181 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 23:32:57.943251  160181 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 23:32:57.943260  160181 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 23:32:57.943268  160181 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 23:32:57.943279  160181 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 23:32:57.943291  160181 command_runner.go:130] > drop_infra_ctr = false
	I1212 23:32:57.943300  160181 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 23:32:57.943318  160181 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 23:32:57.943334  160181 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 23:32:57.943344  160181 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 23:32:57.943354  160181 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 23:32:57.943365  160181 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 23:32:57.943375  160181 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 23:32:57.943389  160181 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 23:32:57.943397  160181 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1212 23:32:57.943410  160181 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 23:32:57.943419  160181 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1212 23:32:57.943425  160181 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1212 23:32:57.943432  160181 command_runner.go:130] > # default_runtime = "runc"
	I1212 23:32:57.943437  160181 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 23:32:57.943446  160181 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1212 23:32:57.943457  160181 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 23:32:57.943464  160181 command_runner.go:130] > # creation as a file is not desired either.
	I1212 23:32:57.943472  160181 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 23:32:57.943480  160181 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 23:32:57.943484  160181 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 23:32:57.943488  160181 command_runner.go:130] > # ]
	I1212 23:32:57.943496  160181 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 23:32:57.943503  160181 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 23:32:57.943512  160181 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1212 23:32:57.943518  160181 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1212 23:32:57.943523  160181 command_runner.go:130] > #
	I1212 23:32:57.943528  160181 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1212 23:32:57.943535  160181 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1212 23:32:57.943540  160181 command_runner.go:130] > #  runtime_type = "oci"
	I1212 23:32:57.943546  160181 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1212 23:32:57.943551  160181 command_runner.go:130] > #  privileged_without_host_devices = false
	I1212 23:32:57.943557  160181 command_runner.go:130] > #  allowed_annotations = []
	I1212 23:32:57.943561  160181 command_runner.go:130] > # Where:
	I1212 23:32:57.943569  160181 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1212 23:32:57.943575  160181 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1212 23:32:57.943582  160181 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 23:32:57.943590  160181 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 23:32:57.943594  160181 command_runner.go:130] > #   in $PATH.
	I1212 23:32:57.943600  160181 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1212 23:32:57.943605  160181 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 23:32:57.943611  160181 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1212 23:32:57.943614  160181 command_runner.go:130] > #   state.
	I1212 23:32:57.943620  160181 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 23:32:57.943649  160181 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1212 23:32:57.943655  160181 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 23:32:57.943663  160181 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 23:32:57.943669  160181 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 23:32:57.943680  160181 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 23:32:57.943690  160181 command_runner.go:130] > #   The currently recognized values are:
	I1212 23:32:57.943701  160181 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 23:32:57.943715  160181 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 23:32:57.943724  160181 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 23:32:57.943735  160181 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 23:32:57.943750  160181 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 23:32:57.943764  160181 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 23:32:57.943777  160181 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 23:32:57.943787  160181 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1212 23:32:57.943792  160181 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 23:32:57.943799  160181 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 23:32:57.943804  160181 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1212 23:32:57.943809  160181 command_runner.go:130] > runtime_type = "oci"
	I1212 23:32:57.943813  160181 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 23:32:57.943818  160181 command_runner.go:130] > runtime_config_path = ""
	I1212 23:32:57.943822  160181 command_runner.go:130] > monitor_path = ""
	I1212 23:32:57.943827  160181 command_runner.go:130] > monitor_cgroup = ""
	I1212 23:32:57.943831  160181 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 23:32:57.943840  160181 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1212 23:32:57.943844  160181 command_runner.go:130] > # running containers
	I1212 23:32:57.943851  160181 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1212 23:32:57.943857  160181 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1212 23:32:57.943884  160181 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1212 23:32:57.943896  160181 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1212 23:32:57.943904  160181 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1212 23:32:57.943912  160181 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1212 23:32:57.943920  160181 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1212 23:32:57.943931  160181 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1212 23:32:57.943941  160181 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1212 23:32:57.943953  160181 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1212 23:32:57.943964  160181 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 23:32:57.943975  160181 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 23:32:57.943985  160181 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 23:32:57.943992  160181 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1212 23:32:57.944002  160181 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1212 23:32:57.944008  160181 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 23:32:57.944019  160181 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 23:32:57.944028  160181 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 23:32:57.944038  160181 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 23:32:57.944046  160181 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 23:32:57.944053  160181 command_runner.go:130] > # Example:
	I1212 23:32:57.944058  160181 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 23:32:57.944065  160181 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 23:32:57.944070  160181 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 23:32:57.944077  160181 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 23:32:57.944081  160181 command_runner.go:130] > # cpuset = 0
	I1212 23:32:57.944087  160181 command_runner.go:130] > # cpushares = "0-1"
	I1212 23:32:57.944091  160181 command_runner.go:130] > # Where:
	I1212 23:32:57.944097  160181 command_runner.go:130] > # The workload name is workload-type.
	I1212 23:32:57.944104  160181 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 23:32:57.944112  160181 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 23:32:57.944118  160181 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 23:32:57.944125  160181 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 23:32:57.944134  160181 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 23:32:57.944138  160181 command_runner.go:130] > # 
	I1212 23:32:57.944147  160181 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 23:32:57.944151  160181 command_runner.go:130] > #
	I1212 23:32:57.944159  160181 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 23:32:57.944165  160181 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1212 23:32:57.944173  160181 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1212 23:32:57.944179  160181 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1212 23:32:57.944187  160181 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1212 23:32:57.944195  160181 command_runner.go:130] > [crio.image]
	I1212 23:32:57.944226  160181 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 23:32:57.944234  160181 command_runner.go:130] > # default_transport = "docker://"
	I1212 23:32:57.944240  160181 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 23:32:57.944252  160181 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 23:32:57.944262  160181 command_runner.go:130] > # global_auth_file = ""
	I1212 23:32:57.944271  160181 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 23:32:57.944282  160181 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:32:57.944293  160181 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1212 23:32:57.944303  160181 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 23:32:57.944315  160181 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 23:32:57.944326  160181 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:32:57.944337  160181 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 23:32:57.944348  160181 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 23:32:57.944358  160181 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1212 23:32:57.944371  160181 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1212 23:32:57.944384  160181 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 23:32:57.944394  160181 command_runner.go:130] > # pause_command = "/pause"
	I1212 23:32:57.944403  160181 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 23:32:57.944414  160181 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 23:32:57.944421  160181 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 23:32:57.944440  160181 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 23:32:57.944453  160181 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 23:32:57.944463  160181 command_runner.go:130] > # signature_policy = ""
	I1212 23:32:57.944473  160181 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 23:32:57.944485  160181 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 23:32:57.944495  160181 command_runner.go:130] > # changing them here.
	I1212 23:32:57.944502  160181 command_runner.go:130] > # insecure_registries = [
	I1212 23:32:57.944513  160181 command_runner.go:130] > # ]
	I1212 23:32:57.944526  160181 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 23:32:57.944534  160181 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 23:32:57.944539  160181 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 23:32:57.944547  160181 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 23:32:57.944553  160181 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 23:32:57.944562  160181 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 23:32:57.944566  160181 command_runner.go:130] > # CNI plugins.
	I1212 23:32:57.944572  160181 command_runner.go:130] > [crio.network]
	I1212 23:32:57.944578  160181 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 23:32:57.944587  160181 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1212 23:32:57.944594  160181 command_runner.go:130] > # cni_default_network = ""
	I1212 23:32:57.944600  160181 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 23:32:57.944607  160181 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 23:32:57.944612  160181 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 23:32:57.944618  160181 command_runner.go:130] > # plugin_dirs = [
	I1212 23:32:57.944622  160181 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 23:32:57.944626  160181 command_runner.go:130] > # ]
	I1212 23:32:57.944632  160181 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 23:32:57.944638  160181 command_runner.go:130] > [crio.metrics]
	I1212 23:32:57.944643  160181 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 23:32:57.944647  160181 command_runner.go:130] > enable_metrics = true
	I1212 23:32:57.944654  160181 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 23:32:57.944659  160181 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 23:32:57.944667  160181 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1212 23:32:57.944675  160181 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 23:32:57.944683  160181 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 23:32:57.944687  160181 command_runner.go:130] > # metrics_collectors = [
	I1212 23:32:57.944694  160181 command_runner.go:130] > # 	"operations",
	I1212 23:32:57.944699  160181 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1212 23:32:57.944706  160181 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1212 23:32:57.944710  160181 command_runner.go:130] > # 	"operations_errors",
	I1212 23:32:57.944717  160181 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1212 23:32:57.944721  160181 command_runner.go:130] > # 	"image_pulls_by_name",
	I1212 23:32:57.944726  160181 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1212 23:32:57.944733  160181 command_runner.go:130] > # 	"image_pulls_failures",
	I1212 23:32:57.944737  160181 command_runner.go:130] > # 	"image_pulls_successes",
	I1212 23:32:57.944744  160181 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 23:32:57.944748  160181 command_runner.go:130] > # 	"image_layer_reuse",
	I1212 23:32:57.944751  160181 command_runner.go:130] > # 	"containers_oom_total",
	I1212 23:32:57.944758  160181 command_runner.go:130] > # 	"containers_oom",
	I1212 23:32:57.944764  160181 command_runner.go:130] > # 	"processes_defunct",
	I1212 23:32:57.944770  160181 command_runner.go:130] > # 	"operations_total",
	I1212 23:32:57.944774  160181 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 23:32:57.944781  160181 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 23:32:57.944785  160181 command_runner.go:130] > # 	"operations_errors_total",
	I1212 23:32:57.944792  160181 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 23:32:57.944797  160181 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 23:32:57.944801  160181 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 23:32:57.944806  160181 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 23:32:57.944812  160181 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 23:32:57.944816  160181 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 23:32:57.944820  160181 command_runner.go:130] > # ]
	I1212 23:32:57.944825  160181 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 23:32:57.944830  160181 command_runner.go:130] > # metrics_port = 9090
	I1212 23:32:57.944835  160181 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 23:32:57.944839  160181 command_runner.go:130] > # metrics_socket = ""
	I1212 23:32:57.944847  160181 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 23:32:57.944853  160181 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 23:32:57.944861  160181 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 23:32:57.944866  160181 command_runner.go:130] > # certificate on any modification event.
	I1212 23:32:57.944871  160181 command_runner.go:130] > # metrics_cert = ""
	I1212 23:32:57.944876  160181 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 23:32:57.944883  160181 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 23:32:57.944892  160181 command_runner.go:130] > # metrics_key = ""
	I1212 23:32:57.944900  160181 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 23:32:57.944904  160181 command_runner.go:130] > [crio.tracing]
	I1212 23:32:57.944912  160181 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 23:32:57.944919  160181 command_runner.go:130] > # enable_tracing = false
	I1212 23:32:57.944924  160181 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1212 23:32:57.944929  160181 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1212 23:32:57.944936  160181 command_runner.go:130] > # Number of samples to collect per million spans.
	I1212 23:32:57.944941  160181 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 23:32:57.944949  160181 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 23:32:57.944953  160181 command_runner.go:130] > [crio.stats]
	I1212 23:32:57.944959  160181 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 23:32:57.944966  160181 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 23:32:57.944972  160181 command_runner.go:130] > # stats_collection_period = 0
	I1212 23:32:57.945001  160181 command_runner.go:130] ! time="2023-12-12 23:32:57.918883214Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1212 23:32:57.945014  160181 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
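	The [crio.runtime.workloads] table documented in the CRI-O configuration dump above is driven entirely by pod annotations. Below is a minimal, hypothetical Go sketch of a pod that opts into the commented "workload-type" example; the annotation keys mirror the example in the config comments, while the container name and cpushares value are illustrative and not taken from this cluster.
	package main
	
	import (
		"encoding/json"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)
	
	func main() {
		// Hypothetical pod opting into the "workload-type" workload from the config
		// comments above: the activation annotation is key-only (value ignored), and
		// the prefixed annotation overrides a resource for one named container.
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "workload-demo",
				Annotations: map[string]string{
					"io.crio/workload":              "",                     // activation_annotation
					"io.crio.workload-type/busybox": `{"cpushares": "512"}`, // illustrative per-container override
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "busybox", Image: "busybox", Command: []string{"sleep", "3600"}},
				},
			},
		}
		out, _ := json.MarshalIndent(pod, "", "  ")
		fmt.Println(string(out))
	}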
	I1212 23:32:57.945069  160181 cni.go:84] Creating CNI manager for ""
	I1212 23:32:57.945083  160181 cni.go:136] 3 nodes found, recommending kindnet
	I1212 23:32:57.945091  160181 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:32:57.945109  160181 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.109 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-510563 NodeName:multinode-510563-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:32:57.945206  160181 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-510563-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:32:57.945270  160181 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-510563-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-510563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 23:32:57.945327  160181 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:32:57.954579  160181 command_runner.go:130] > kubeadm
	I1212 23:32:57.954597  160181 command_runner.go:130] > kubectl
	I1212 23:32:57.954603  160181 command_runner.go:130] > kubelet
	I1212 23:32:57.954626  160181 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:32:57.954679  160181 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1212 23:32:57.962800  160181 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1212 23:32:57.978227  160181 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:32:57.993591  160181 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I1212 23:32:57.997340  160181 command_runner.go:130] > 192.168.39.38	control-plane.minikube.internal
	I1212 23:32:57.997605  160181 host.go:66] Checking if "multinode-510563" exists ...
	I1212 23:32:57.997891  160181 config.go:182] Loaded profile config "multinode-510563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:32:57.998017  160181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:32:57.998055  160181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:32:58.012831  160181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42859
	I1212 23:32:58.013315  160181 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:32:58.013769  160181 main.go:141] libmachine: Using API Version  1
	I1212 23:32:58.013791  160181 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:32:58.014073  160181 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:32:58.014229  160181 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:32:58.014353  160181 start.go:304] JoinCluster: &{Name:multinode-510563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-510563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.133 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:32:58.014491  160181 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1212 23:32:58.014506  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:32:58.017815  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:32:58.018383  160181 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:32:58.018424  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:32:58.018632  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:32:58.018800  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:32:58.018958  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:32:58.019086  160181 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa Username:docker}
	I1212 23:32:58.182073  160181 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token rh4ybb.5ontdvlgw4y0wcms --discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1212 23:32:58.189990  160181 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 23:32:58.190030  160181 host.go:66] Checking if "multinode-510563" exists ...
	I1212 23:32:58.190315  160181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:32:58.190344  160181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:32:58.204795  160181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45341
	I1212 23:32:58.205205  160181 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:32:58.205690  160181 main.go:141] libmachine: Using API Version  1
	I1212 23:32:58.205710  160181 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:32:58.206053  160181 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:32:58.206298  160181 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:32:58.206542  160181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-510563-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1212 23:32:58.206571  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:32:58.210001  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:32:58.210429  160181 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:32:58.210460  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:32:58.210625  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:32:58.210795  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:32:58.210972  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:32:58.211113  160181 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa Username:docker}
	I1212 23:32:58.422318  160181 command_runner.go:130] > node/multinode-510563-m02 cordoned
	I1212 23:33:01.481139  160181 command_runner.go:130] > pod "busybox-5bc68d56bd-6hjc6" has DeletionTimestamp older than 1 seconds, skipping
	I1212 23:33:01.481214  160181 command_runner.go:130] > node/multinode-510563-m02 drained
	I1212 23:33:01.483174  160181 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1212 23:33:01.483191  160181 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-5v7sf, kube-system/kube-proxy-msx8s
	I1212 23:33:01.483212  160181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-510563-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.276645743s)
	I1212 23:33:01.483226  160181 node.go:108] successfully drained node "m02"
	I1212 23:33:01.483552  160181 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:33:01.483757  160181 kapi.go:59] client config for multinode-510563: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.crt", KeyFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.key", CAFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:33:01.484104  160181 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1212 23:33:01.484153  160181 round_trippers.go:463] DELETE https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:33:01.484161  160181 round_trippers.go:469] Request Headers:
	I1212 23:33:01.484169  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:33:01.484174  160181 round_trippers.go:473]     Content-Type: application/json
	I1212 23:33:01.484180  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:33:01.495487  160181 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1212 23:33:01.495502  160181 round_trippers.go:577] Response Headers:
	I1212 23:33:01.495509  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:33:01 GMT
	I1212 23:33:01.495514  160181 round_trippers.go:580]     Audit-Id: 0e0821c4-6489-481c-b93b-65d78950a577
	I1212 23:33:01.495519  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:33:01.495524  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:33:01.495529  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:33:01.495534  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:33:01.495542  160181 round_trippers.go:580]     Content-Length: 171
	I1212 23:33:01.495864  160181 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-510563-m02","kind":"nodes","uid":"d2556948-0b22-4680-ae18-714b42dd72a0"}}
	I1212 23:33:01.495901  160181 node.go:124] successfully deleted node "m02"
	I1212 23:33:01.495910  160181 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
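	The node removal above is a single DELETE against /api/v1/nodes/<name>. A minimal client-go sketch of the same call follows; the kubeconfig path is a placeholder, not the one used in this run.
	package main
	
	import (
		"context"
		"log"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Placeholder kubeconfig path; substitute the kubeconfig for the target cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent to the DELETE .../api/v1/nodes/multinode-510563-m02 request above.
		if err := clientset.CoreV1().Nodes().Delete(context.Background(), "multinode-510563-m02", metav1.DeleteOptions{}); err != nil {
			log.Fatal(err)
		}
	}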
	I1212 23:33:01.495926  160181 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 23:33:01.495943  160181 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rh4ybb.5ontdvlgw4y0wcms --discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-510563-m02"
	I1212 23:33:01.551085  160181 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 23:33:01.702252  160181 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1212 23:33:01.702286  160181 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1212 23:33:01.762724  160181 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:33:01.762751  160181 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:33:01.762756  160181 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 23:33:01.912902  160181 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1212 23:33:02.443514  160181 command_runner.go:130] > This node has joined the cluster:
	I1212 23:33:02.443540  160181 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1212 23:33:02.443548  160181 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1212 23:33:02.443556  160181 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1212 23:33:02.447207  160181 command_runner.go:130] ! W1212 23:33:01.538562    2676 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1212 23:33:02.447235  160181 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1212 23:33:02.447247  160181 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1212 23:33:02.447260  160181 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1212 23:33:02.447309  160181 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1212 23:33:02.721228  160181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=multinode-510563 minikube.k8s.io/updated_at=2023_12_12T23_33_02_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:33:02.831198  160181 command_runner.go:130] > node/multinode-510563-m02 labeled
	I1212 23:33:02.846572  160181 command_runner.go:130] > node/multinode-510563-m03 labeled
	I1212 23:33:02.848455  160181 start.go:306] JoinCluster complete in 4.834096258s
	I1212 23:33:02.848479  160181 cni.go:84] Creating CNI manager for ""
	I1212 23:33:02.848485  160181 cni.go:136] 3 nodes found, recommending kindnet
	I1212 23:33:02.848544  160181 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 23:33:02.858104  160181 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 23:33:02.858128  160181 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1212 23:33:02.858135  160181 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 23:33:02.858142  160181 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 23:33:02.858149  160181 command_runner.go:130] > Access: 2023-12-12 23:30:35.501189212 +0000
	I1212 23:33:02.858158  160181 command_runner.go:130] > Modify: 2023-12-08 06:25:18.000000000 +0000
	I1212 23:33:02.858166  160181 command_runner.go:130] > Change: 2023-12-12 23:30:33.624189212 +0000
	I1212 23:33:02.858173  160181 command_runner.go:130] >  Birth: -
	I1212 23:33:02.858346  160181 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 23:33:02.858365  160181 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 23:33:02.881506  160181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 23:33:03.243088  160181 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1212 23:33:03.249744  160181 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1212 23:33:03.256611  160181 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1212 23:33:03.274578  160181 command_runner.go:130] > daemonset.apps/kindnet configured
	I1212 23:33:03.278091  160181 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:33:03.278438  160181 kapi.go:59] client config for multinode-510563: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.crt", KeyFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.key", CAFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:33:03.278833  160181 round_trippers.go:463] GET https://192.168.39.38:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:33:03.278852  160181 round_trippers.go:469] Request Headers:
	I1212 23:33:03.278862  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:33:03.278873  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:33:03.281241  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:33:03.281260  160181 round_trippers.go:577] Response Headers:
	I1212 23:33:03.281271  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:33:03.281278  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:33:03.281287  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:33:03.281292  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:33:03.281300  160181 round_trippers.go:580]     Content-Length: 291
	I1212 23:33:03.281305  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:33:03 GMT
	I1212 23:33:03.281312  160181 round_trippers.go:580]     Audit-Id: 48a19bb6-811c-4c0f-9bb1-5d19cc454d62
	I1212 23:33:03.281332  160181 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b98a4ef4-74a2-4d35-a29b-c065c5f3121c","resourceVersion":"907","creationTimestamp":"2023-12-12T23:20:36Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1212 23:33:03.281417  160181 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-510563" context rescaled to 1 replicas
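	The "rescaled to 1 replicas" step above goes through the deployment's scale subresource. A rough client-go sketch of that read-modify-write, assuming a clientset built as in the previous sketch:
	package sketch
	
	import (
		"context"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// rescaleCoreDNS sets the kube-system/coredns deployment to one replica via the
	// scale subresource, mirroring the GET .../deployments/coredns/scale shown above.
	func rescaleCoreDNS(ctx context.Context, clientset kubernetes.Interface) error {
		scale, err := clientset.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if scale.Spec.Replicas == 1 {
			return nil
		}
		scale.Spec.Replicas = 1
		_, err = clientset.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}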
	I1212 23:33:03.281451  160181 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 23:33:03.283242  160181 out.go:177] * Verifying Kubernetes components...
	I1212 23:33:03.284752  160181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:33:03.299589  160181 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:33:03.299904  160181 kapi.go:59] client config for multinode-510563: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.crt", KeyFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.key", CAFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:33:03.300211  160181 node_ready.go:35] waiting up to 6m0s for node "multinode-510563-m02" to be "Ready" ...
	I1212 23:33:03.300297  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:33:03.300310  160181 round_trippers.go:469] Request Headers:
	I1212 23:33:03.300321  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:33:03.300333  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:33:03.304004  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:33:03.304024  160181 round_trippers.go:577] Response Headers:
	I1212 23:33:03.304033  160181 round_trippers.go:580]     Audit-Id: 071e9b6a-0acb-45e0-9bbb-f952a73180ad
	I1212 23:33:03.304042  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:33:03.304050  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:33:03.304058  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:33:03.304074  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:33:03.304082  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:33:03 GMT
	I1212 23:33:03.304736  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"624fd936-a780-4801-8d74-d9563b64b861","resourceVersion":"1058","creationTimestamp":"2023-12-12T23:33:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_33_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:33:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I1212 23:33:03.305035  160181 node_ready.go:49] node "multinode-510563-m02" has status "Ready":"True"
	I1212 23:33:03.305054  160181 node_ready.go:38] duration metric: took 4.820529ms waiting for node "multinode-510563-m02" to be "Ready" ...
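	The "Ready":"True" check above amounts to reading the NodeReady condition from the node's status. A minimal sketch of the same check with client-go, again assuming an existing clientset:
	package sketch
	
	import (
		"context"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// nodeIsReady reports whether the named node's NodeReady condition is True,
	// which is what the "Ready":"True" check in the log above is testing.
	func nodeIsReady(ctx context.Context, clientset kubernetes.Interface, name string) (bool, error) {
		node, err := clientset.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}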
	I1212 23:33:03.305065  160181 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:33:03.305130  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I1212 23:33:03.305142  160181 round_trippers.go:469] Request Headers:
	I1212 23:33:03.305152  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:33:03.305164  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:33:03.312832  160181 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1212 23:33:03.312859  160181 round_trippers.go:577] Response Headers:
	I1212 23:33:03.312872  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:33:03.312880  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:33:03.312887  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:33:03.312894  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:33:03.312904  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:33:03 GMT
	I1212 23:33:03.312922  160181 round_trippers.go:580]     Audit-Id: 40c98cdc-f15e-47d7-9dd6-76859bdc32c8
	I1212 23:33:03.313787  160181 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1068"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"894","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82206 chars]
	I1212 23:33:03.317315  160181 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zcxks" in "kube-system" namespace to be "Ready" ...
	I1212 23:33:03.317419  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcxks
	I1212 23:33:03.317432  160181 round_trippers.go:469] Request Headers:
	I1212 23:33:03.317444  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:33:03.317452  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:33:03.320829  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:33:03.320850  160181 round_trippers.go:577] Response Headers:
	I1212 23:33:03.320858  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:33:03.320866  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:33:03 GMT
	I1212 23:33:03.320874  160181 round_trippers.go:580]     Audit-Id: eb4f7705-80e3-4ed4-a8f4-754d2fa47058
	I1212 23:33:03.320882  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:33:03.320889  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:33:03.320894  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:33:03.321120  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"894","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1212 23:33:03.321621  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:33:03.321640  160181 round_trippers.go:469] Request Headers:
	I1212 23:33:03.321651  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:33:03.321659  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:33:03.323752  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:33:03.323773  160181 round_trippers.go:577] Response Headers:
	I1212 23:33:03.323784  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:33:03.323792  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:33:03.323800  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:33:03 GMT
	I1212 23:33:03.323808  160181 round_trippers.go:580]     Audit-Id: 2e629a1b-580f-4617-9e2a-751654771c86
	I1212 23:33:03.323815  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:33:03.323822  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:33:03.323986  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"926","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 23:33:03.324384  160181 pod_ready.go:92] pod "coredns-5dd5756b68-zcxks" in "kube-system" namespace has status "Ready":"True"
	I1212 23:33:03.324403  160181 pod_ready.go:81] duration metric: took 7.059989ms waiting for pod "coredns-5dd5756b68-zcxks" in "kube-system" namespace to be "Ready" ...
	I1212 23:33:03.324416  160181 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:33:03.324498  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-510563
	I1212 23:33:03.324510  160181 round_trippers.go:469] Request Headers:
	I1212 23:33:03.324520  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:33:03.324532  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:33:03.327063  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:33:03.327077  160181 round_trippers.go:577] Response Headers:
	I1212 23:33:03.327083  160181 round_trippers.go:580]     Audit-Id: 707ff0bc-113f-494c-9300-b56b8ada6c95
	I1212 23:33:03.327088  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:33:03.327093  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:33:03.327098  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:33:03.327106  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:33:03.327114  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:33:03 GMT
	I1212 23:33:03.327259  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-510563","namespace":"kube-system","uid":"2748a67b-24f2-4b90-bf95-eb56755a397a","resourceVersion":"917","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.mirror":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.seen":"2023-12-12T23:20:36.354957049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1212 23:33:03.327629  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:33:03.327641  160181 round_trippers.go:469] Request Headers:
	I1212 23:33:03.327648  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:33:03.327653  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:33:03.330446  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:33:03.330466  160181 round_trippers.go:577] Response Headers:
	I1212 23:33:03.330477  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:33:03.330485  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:33:03.330490  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:33:03 GMT
	I1212 23:33:03.330495  160181 round_trippers.go:580]     Audit-Id: fcfdd901-78ef-4cae-bcdd-4fff078bd1f1
	I1212 23:33:03.330500  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:33:03.330505  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:33:03.330680  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"926","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 23:33:03.331016  160181 pod_ready.go:92] pod "etcd-multinode-510563" in "kube-system" namespace has status "Ready":"True"
	I1212 23:33:03.331035  160181 pod_ready.go:81] duration metric: took 6.607859ms waiting for pod "etcd-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:33:03.331057  160181 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:33:03.331123  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-510563
	I1212 23:33:03.331136  160181 round_trippers.go:469] Request Headers:
	I1212 23:33:03.331146  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:33:03.331156  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:33:03.333308  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:33:03.333322  160181 round_trippers.go:577] Response Headers:
	I1212 23:33:03.333328  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:33:03.333334  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:33:03 GMT
	I1212 23:33:03.333339  160181 round_trippers.go:580]     Audit-Id: d82c4ecd-37e7-4885-9cb3-be627a9b3493
	I1212 23:33:03.333345  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:33:03.333354  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:33:03.333361  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:33:03.333527  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-510563","namespace":"kube-system","uid":"e8a8ed00-d13d-44f0-b7d6-b42bf1342d95","resourceVersion":"900","creationTimestamp":"2023-12-12T23:20:34Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.38:8443","kubernetes.io/config.hash":"4b970951c1b4ca2bc525afa7c2eb2fef","kubernetes.io/config.mirror":"4b970951c1b4ca2bc525afa7c2eb2fef","kubernetes.io/config.seen":"2023-12-12T23:20:27.932579600Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1212 23:33:03.333878  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:33:03.333895  160181 round_trippers.go:469] Request Headers:
	I1212 23:33:03.333902  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:33:03.333907  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:33:03.335576  160181 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:33:03.335594  160181 round_trippers.go:577] Response Headers:
	I1212 23:33:03.335604  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:33:03.335611  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:33:03.335618  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:33:03 GMT
	I1212 23:33:03.335625  160181 round_trippers.go:580]     Audit-Id: 9229f634-b673-4f19-8961-746dcec346d6
	I1212 23:33:03.335634  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:33:03.335643  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:33:03.335896  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"926","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 23:33:03.336197  160181 pod_ready.go:92] pod "kube-apiserver-multinode-510563" in "kube-system" namespace has status "Ready":"True"
	I1212 23:33:03.336212  160181 pod_ready.go:81] duration metric: took 5.144734ms waiting for pod "kube-apiserver-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:33:03.336220  160181 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:33:03.336271  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-510563
	I1212 23:33:03.336282  160181 round_trippers.go:469] Request Headers:
	I1212 23:33:03.336292  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:33:03.336309  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:33:03.338994  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:33:03.339012  160181 round_trippers.go:577] Response Headers:
	I1212 23:33:03.339022  160181 round_trippers.go:580]     Audit-Id: 72d4749d-d702-415d-a1be-4211406d325c
	I1212 23:33:03.339031  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:33:03.339038  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:33:03.339045  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:33:03.339067  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:33:03.339084  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:33:03 GMT
	I1212 23:33:03.339267  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-510563","namespace":"kube-system","uid":"efdc7f68-25d6-4f6a-ab8f-1dec43407375","resourceVersion":"887","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"50f588add554ab298cca0792048dbecc","kubernetes.io/config.mirror":"50f588add554ab298cca0792048dbecc","kubernetes.io/config.seen":"2023-12-12T23:20:36.354954910Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1212 23:33:03.339712  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:33:03.339725  160181 round_trippers.go:469] Request Headers:
	I1212 23:33:03.339732  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:33:03.339737  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:33:03.341846  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:33:03.341863  160181 round_trippers.go:577] Response Headers:
	I1212 23:33:03.341872  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:33:03 GMT
	I1212 23:33:03.341881  160181 round_trippers.go:580]     Audit-Id: b6a52b73-8376-4adb-aae9-e89a82c3b96f
	I1212 23:33:03.341897  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:33:03.341906  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:33:03.341912  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:33:03.341917  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:33:03.342090  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"926","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 23:33:03.342356  160181 pod_ready.go:92] pod "kube-controller-manager-multinode-510563" in "kube-system" namespace has status "Ready":"True"
	I1212 23:33:03.342371  160181 pod_ready.go:81] duration metric: took 6.145962ms waiting for pod "kube-controller-manager-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:33:03.342379  160181 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fbk65" in "kube-system" namespace to be "Ready" ...
	I1212 23:33:03.500770  160181 request.go:629] Waited for 158.320997ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fbk65
	I1212 23:33:03.500848  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fbk65
	I1212 23:33:03.500856  160181 round_trippers.go:469] Request Headers:
	I1212 23:33:03.500866  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:33:03.500888  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:33:03.504220  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:33:03.504243  160181 round_trippers.go:577] Response Headers:
	I1212 23:33:03.504253  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:33:03.504262  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:33:03 GMT
	I1212 23:33:03.504270  160181 round_trippers.go:580]     Audit-Id: cb436372-9f63-41a1-800e-e22f2791e959
	I1212 23:33:03.504279  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:33:03.504287  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:33:03.504295  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:33:03.504493  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fbk65","generateName":"kube-proxy-","namespace":"kube-system","uid":"478c2dce-ac51-47ac-9d34-20dc7c331056","resourceVersion":"742","creationTimestamp":"2023-12-12T23:22:27Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d1b4c800-a24b-499d-bbe3-4a554353bc2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:22:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1b4c800-a24b-499d-bbe3-4a554353bc2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1212 23:33:03.701358  160181 request.go:629] Waited for 196.36904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m03
	I1212 23:33:03.701441  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m03
	I1212 23:33:03.701449  160181 round_trippers.go:469] Request Headers:
	I1212 23:33:03.701461  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:33:03.701472  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:33:03.704330  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:33:03.704352  160181 round_trippers.go:577] Response Headers:
	I1212 23:33:03.704363  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:33:03 GMT
	I1212 23:33:03.704372  160181 round_trippers.go:580]     Audit-Id: 2278be0e-df79-4d44-aeba-d46ebe0bfe3b
	I1212 23:33:03.704380  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:33:03.704389  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:33:03.704398  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:33:03.704406  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:33:03.704554  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m03","uid":"6de0e5a4-53e7-4397-9be8-0053fa116498","resourceVersion":"1059","creationTimestamp":"2023-12-12T23:23:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_33_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:23:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 3966 chars]
	I1212 23:33:03.704877  160181 pod_ready.go:92] pod "kube-proxy-fbk65" in "kube-system" namespace has status "Ready":"True"
	I1212 23:33:03.704902  160181 pod_ready.go:81] duration metric: took 362.516632ms waiting for pod "kube-proxy-fbk65" in "kube-system" namespace to be "Ready" ...
	I1212 23:33:03.704915  160181 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hspw8" in "kube-system" namespace to be "Ready" ...
	I1212 23:33:03.901353  160181 request.go:629] Waited for 196.375773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hspw8
	I1212 23:33:03.901406  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hspw8
	I1212 23:33:03.901410  160181 round_trippers.go:469] Request Headers:
	I1212 23:33:03.901418  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:33:03.901425  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:33:03.910087  160181 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 23:33:03.910107  160181 round_trippers.go:577] Response Headers:
	I1212 23:33:03.910115  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:33:03 GMT
	I1212 23:33:03.910120  160181 round_trippers.go:580]     Audit-Id: d442eb85-706b-4924-bffb-cf55e29ef5e8
	I1212 23:33:03.910125  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:33:03.910130  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:33:03.910138  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:33:03.910146  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:33:03.910443  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hspw8","generateName":"kube-proxy-","namespace":"kube-system","uid":"a2255be6-8705-40cd-8f35-a3e82906190c","resourceVersion":"855","creationTimestamp":"2023-12-12T23:20:49Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d1b4c800-a24b-499d-bbe3-4a554353bc2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1b4c800-a24b-499d-bbe3-4a554353bc2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1212 23:33:04.101345  160181 request.go:629] Waited for 190.358313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:33:04.101415  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:33:04.101423  160181 round_trippers.go:469] Request Headers:
	I1212 23:33:04.101434  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:33:04.101448  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:33:04.104290  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:33:04.104315  160181 round_trippers.go:577] Response Headers:
	I1212 23:33:04.104325  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:33:04.104334  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:33:04.104342  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:33:04.104348  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:33:04 GMT
	I1212 23:33:04.104356  160181 round_trippers.go:580]     Audit-Id: 8e87c560-91ba-4def-9b00-a5eb4f4af47d
	I1212 23:33:04.104363  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:33:04.104539  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"926","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 23:33:04.104986  160181 pod_ready.go:92] pod "kube-proxy-hspw8" in "kube-system" namespace has status "Ready":"True"
	I1212 23:33:04.105009  160181 pod_ready.go:81] duration metric: took 400.08589ms waiting for pod "kube-proxy-hspw8" in "kube-system" namespace to be "Ready" ...
	I1212 23:33:04.105028  160181 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-msx8s" in "kube-system" namespace to be "Ready" ...
	I1212 23:33:04.300387  160181 request.go:629] Waited for 195.288771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msx8s
	I1212 23:33:04.300484  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msx8s
	I1212 23:33:04.300491  160181 round_trippers.go:469] Request Headers:
	I1212 23:33:04.300546  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:33:04.300559  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:33:04.303237  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:33:04.303251  160181 round_trippers.go:577] Response Headers:
	I1212 23:33:04.303256  160181 round_trippers.go:580]     Audit-Id: 44c4e41c-5b87-45ad-a6e5-58144c521c32
	I1212 23:33:04.303262  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:33:04.303267  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:33:04.303272  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:33:04.303278  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:33:04.303287  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:33:04 GMT
	I1212 23:33:04.303683  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-msx8s","generateName":"kube-proxy-","namespace":"kube-system","uid":"f41b9a6d-8132-45a6-9847-5a762664b008","resourceVersion":"1079","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d1b4c800-a24b-499d-bbe3-4a554353bc2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1b4c800-a24b-499d-bbe3-4a554353bc2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I1212 23:33:04.500464  160181 request.go:629] Waited for 196.354498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:33:04.500583  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:33:04.500608  160181 round_trippers.go:469] Request Headers:
	I1212 23:33:04.500622  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:33:04.500641  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:33:04.503532  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:33:04.503557  160181 round_trippers.go:577] Response Headers:
	I1212 23:33:04.503567  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:33:04 GMT
	I1212 23:33:04.503575  160181 round_trippers.go:580]     Audit-Id: c1336d21-6700-42c8-b605-8a406c774e37
	I1212 23:33:04.503583  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:33:04.503591  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:33:04.503603  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:33:04.503611  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:33:04.503819  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"624fd936-a780-4801-8d74-d9563b64b861","resourceVersion":"1058","creationTimestamp":"2023-12-12T23:33:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_33_02_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:33:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I1212 23:33:04.504108  160181 pod_ready.go:92] pod "kube-proxy-msx8s" in "kube-system" namespace has status "Ready":"True"
	I1212 23:33:04.504128  160181 pod_ready.go:81] duration metric: took 399.091414ms waiting for pod "kube-proxy-msx8s" in "kube-system" namespace to be "Ready" ...
	I1212 23:33:04.504140  160181 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:33:04.700539  160181 request.go:629] Waited for 196.335177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-510563
	I1212 23:33:04.700618  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-510563
	I1212 23:33:04.700640  160181 round_trippers.go:469] Request Headers:
	I1212 23:33:04.700655  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:33:04.700668  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:33:04.704878  160181 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:33:04.704900  160181 round_trippers.go:577] Response Headers:
	I1212 23:33:04.704910  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:33:04 GMT
	I1212 23:33:04.704919  160181 round_trippers.go:580]     Audit-Id: 2f033882-2c14-4cfc-ae69-0a789a065f06
	I1212 23:33:04.704927  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:33:04.704934  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:33:04.704942  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:33:04.704949  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:33:04.705184  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-510563","namespace":"kube-system","uid":"044da73c-9466-4a43-b283-5f4b9cc04df9","resourceVersion":"895","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fdb335c77d5fb1581ea23fa0adf419e9","kubernetes.io/config.mirror":"fdb335c77d5fb1581ea23fa0adf419e9","kubernetes.io/config.seen":"2023-12-12T23:20:36.354955844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1212 23:33:04.900882  160181 request.go:629] Waited for 195.299324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:33:04.900964  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:33:04.900976  160181 round_trippers.go:469] Request Headers:
	I1212 23:33:04.900992  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:33:04.901022  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:33:04.903863  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:33:04.903885  160181 round_trippers.go:577] Response Headers:
	I1212 23:33:04.903895  160181 round_trippers.go:580]     Audit-Id: 90615d2c-050a-415d-8d08-5b58d97c03dc
	I1212 23:33:04.903904  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:33:04.903911  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:33:04.903919  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:33:04.903928  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:33:04.903939  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:33:04 GMT
	I1212 23:33:04.904270  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"926","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 23:33:04.904623  160181 pod_ready.go:92] pod "kube-scheduler-multinode-510563" in "kube-system" namespace has status "Ready":"True"
	I1212 23:33:04.904640  160181 pod_ready.go:81] duration metric: took 400.491429ms waiting for pod "kube-scheduler-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:33:04.904654  160181 pod_ready.go:38] duration metric: took 1.599573701s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:33:04.904675  160181 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:33:04.904735  160181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:33:04.919099  160181 system_svc.go:56] duration metric: took 14.416475ms WaitForService to wait for kubelet.
	I1212 23:33:04.919130  160181 kubeadm.go:581] duration metric: took 1.637655195s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:33:04.919151  160181 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:33:05.101373  160181 request.go:629] Waited for 182.12544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes
	I1212 23:33:05.101431  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes
	I1212 23:33:05.101437  160181 round_trippers.go:469] Request Headers:
	I1212 23:33:05.101445  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:33:05.101453  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:33:05.104170  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:33:05.104195  160181 round_trippers.go:577] Response Headers:
	I1212 23:33:05.104205  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:33:05.104212  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:33:05 GMT
	I1212 23:33:05.104221  160181 round_trippers.go:580]     Audit-Id: efd00975-c7e7-47bd-881e-1fe52d0d1f87
	I1212 23:33:05.104229  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:33:05.104241  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:33:05.104251  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:33:05.104740  160181 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1083"},"items":[{"metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"926","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16210 chars]
	I1212 23:33:05.105380  160181 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:33:05.105400  160181 node_conditions.go:123] node cpu capacity is 2
	I1212 23:33:05.105412  160181 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:33:05.105417  160181 node_conditions.go:123] node cpu capacity is 2
	I1212 23:33:05.105424  160181 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:33:05.105428  160181 node_conditions.go:123] node cpu capacity is 2
	I1212 23:33:05.105434  160181 node_conditions.go:105] duration metric: took 186.27804ms to run NodePressure ...
	I1212 23:33:05.105443  160181 start.go:228] waiting for startup goroutines ...
	I1212 23:33:05.105463  160181 start.go:242] writing updated cluster config ...
	I1212 23:33:05.105866  160181 config.go:182] Loaded profile config "multinode-510563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:33:05.105974  160181 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/config.json ...
	I1212 23:33:05.108178  160181 out.go:177] * Starting worker node multinode-510563-m03 in cluster multinode-510563
	I1212 23:33:05.109985  160181 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:33:05.110012  160181 cache.go:56] Caching tarball of preloaded images
	I1212 23:33:05.110118  160181 preload.go:174] Found /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 23:33:05.110132  160181 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 23:33:05.110236  160181 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/config.json ...
	I1212 23:33:05.110415  160181 start.go:365] acquiring machines lock for multinode-510563-m03: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:33:05.110463  160181 start.go:369] acquired machines lock for "multinode-510563-m03" in 28.269µs
	I1212 23:33:05.110483  160181 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:33:05.110492  160181 fix.go:54] fixHost starting: m03
	I1212 23:33:05.110731  160181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:33:05.110755  160181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:33:05.125024  160181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I1212 23:33:05.125463  160181 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:33:05.125866  160181 main.go:141] libmachine: Using API Version  1
	I1212 23:33:05.125884  160181 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:33:05.126225  160181 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:33:05.126410  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .DriverName
	I1212 23:33:05.126541  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetState
	I1212 23:33:05.128239  160181 fix.go:102] recreateIfNeeded on multinode-510563-m03: state=Running err=<nil>
	W1212 23:33:05.128258  160181 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:33:05.129980  160181 out.go:177] * Updating the running kvm2 "multinode-510563-m03" VM ...
	I1212 23:33:05.131494  160181 machine.go:88] provisioning docker machine ...
	I1212 23:33:05.131510  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .DriverName
	I1212 23:33:05.131717  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetMachineName
	I1212 23:33:05.131875  160181 buildroot.go:166] provisioning hostname "multinode-510563-m03"
	I1212 23:33:05.131889  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetMachineName
	I1212 23:33:05.132048  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHHostname
	I1212 23:33:05.134127  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:33:05.134518  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:03:0f", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:23:04 +0000 UTC Type:0 Mac:52:54:00:03:03:0f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-510563-m03 Clientid:01:52:54:00:03:03:0f}
	I1212 23:33:05.134551  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:33:05.134704  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHPort
	I1212 23:33:05.134874  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHKeyPath
	I1212 23:33:05.135047  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHKeyPath
	I1212 23:33:05.135194  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHUsername
	I1212 23:33:05.135335  160181 main.go:141] libmachine: Using SSH client type: native
	I1212 23:33:05.135775  160181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1212 23:33:05.135795  160181 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-510563-m03 && echo "multinode-510563-m03" | sudo tee /etc/hostname
	I1212 23:33:05.270400  160181 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-510563-m03
	
	I1212 23:33:05.270426  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHHostname
	I1212 23:33:05.273246  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:33:05.273609  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:03:0f", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:23:04 +0000 UTC Type:0 Mac:52:54:00:03:03:0f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-510563-m03 Clientid:01:52:54:00:03:03:0f}
	I1212 23:33:05.273646  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:33:05.273803  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHPort
	I1212 23:33:05.274007  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHKeyPath
	I1212 23:33:05.274201  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHKeyPath
	I1212 23:33:05.274379  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHUsername
	I1212 23:33:05.274545  160181 main.go:141] libmachine: Using SSH client type: native
	I1212 23:33:05.274857  160181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1212 23:33:05.274874  160181 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-510563-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-510563-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-510563-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:33:05.393691  160181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:33:05.393724  160181 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1212 23:33:05.393738  160181 buildroot.go:174] setting up certificates
	I1212 23:33:05.393752  160181 provision.go:83] configureAuth start
	I1212 23:33:05.393764  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetMachineName
	I1212 23:33:05.394097  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetIP
	I1212 23:33:05.396932  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:33:05.397342  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:03:0f", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:23:04 +0000 UTC Type:0 Mac:52:54:00:03:03:0f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-510563-m03 Clientid:01:52:54:00:03:03:0f}
	I1212 23:33:05.397373  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:33:05.397530  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHHostname
	I1212 23:33:05.399788  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:33:05.400153  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:03:0f", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:23:04 +0000 UTC Type:0 Mac:52:54:00:03:03:0f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-510563-m03 Clientid:01:52:54:00:03:03:0f}
	I1212 23:33:05.400181  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:33:05.400344  160181 provision.go:138] copyHostCerts
	I1212 23:33:05.400383  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1212 23:33:05.400415  160181 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1212 23:33:05.400423  160181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1212 23:33:05.400521  160181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1212 23:33:05.400592  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1212 23:33:05.400627  160181 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1212 23:33:05.400634  160181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1212 23:33:05.400658  160181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1212 23:33:05.400700  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1212 23:33:05.400715  160181 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1212 23:33:05.400721  160181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1212 23:33:05.400744  160181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1212 23:33:05.400802  160181 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.multinode-510563-m03 san=[192.168.39.133 192.168.39.133 localhost 127.0.0.1 minikube multinode-510563-m03]
	I1212 23:33:05.600992  160181 provision.go:172] copyRemoteCerts
	I1212 23:33:05.601048  160181 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:33:05.601069  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHHostname
	I1212 23:33:05.603825  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:33:05.604191  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:03:0f", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:23:04 +0000 UTC Type:0 Mac:52:54:00:03:03:0f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-510563-m03 Clientid:01:52:54:00:03:03:0f}
	I1212 23:33:05.604224  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:33:05.604427  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHPort
	I1212 23:33:05.604633  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHKeyPath
	I1212 23:33:05.604785  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHUsername
	I1212 23:33:05.604965  160181 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m03/id_rsa Username:docker}
	I1212 23:33:05.694122  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 23:33:05.694194  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 23:33:05.718942  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 23:33:05.719023  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:33:05.743135  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 23:33:05.743220  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:33:05.767157  160181 provision.go:86] duration metric: configureAuth took 373.392856ms
	I1212 23:33:05.767184  160181 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:33:05.767377  160181 config.go:182] Loaded profile config "multinode-510563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:33:05.767452  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHHostname
	I1212 23:33:05.770441  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:33:05.770803  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:03:0f", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:23:04 +0000 UTC Type:0 Mac:52:54:00:03:03:0f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-510563-m03 Clientid:01:52:54:00:03:03:0f}
	I1212 23:33:05.770838  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:33:05.771008  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHPort
	I1212 23:33:05.771223  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHKeyPath
	I1212 23:33:05.771356  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHKeyPath
	I1212 23:33:05.771518  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHUsername
	I1212 23:33:05.771746  160181 main.go:141] libmachine: Using SSH client type: native
	I1212 23:33:05.772041  160181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1212 23:33:05.772057  160181 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:34:36.331223  160181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:34:36.331254  160181 machine.go:91] provisioned docker machine in 1m31.199747873s
	I1212 23:34:36.331265  160181 start.go:300] post-start starting for "multinode-510563-m03" (driver="kvm2")
	I1212 23:34:36.331275  160181 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:34:36.331291  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .DriverName
	I1212 23:34:36.331672  160181 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:34:36.331713  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHHostname
	I1212 23:34:36.334685  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:34:36.335078  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:03:0f", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:23:04 +0000 UTC Type:0 Mac:52:54:00:03:03:0f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-510563-m03 Clientid:01:52:54:00:03:03:0f}
	I1212 23:34:36.335115  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:34:36.335253  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHPort
	I1212 23:34:36.335446  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHKeyPath
	I1212 23:34:36.335622  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHUsername
	I1212 23:34:36.335779  160181 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m03/id_rsa Username:docker}
	I1212 23:34:36.430661  160181 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:34:36.435381  160181 command_runner.go:130] > NAME=Buildroot
	I1212 23:34:36.435400  160181 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
	I1212 23:34:36.435405  160181 command_runner.go:130] > ID=buildroot
	I1212 23:34:36.435409  160181 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 23:34:36.435416  160181 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 23:34:36.435451  160181 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:34:36.435467  160181 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1212 23:34:36.435551  160181 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1212 23:34:36.435645  160181 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1212 23:34:36.435656  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> /etc/ssl/certs/1435412.pem
	I1212 23:34:36.435729  160181 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:34:36.443774  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1212 23:34:36.467100  160181 start.go:303] post-start completed in 135.819198ms
	I1212 23:34:36.467122  160181 fix.go:56] fixHost completed within 1m31.356629806s
	I1212 23:34:36.467141  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHHostname
	I1212 23:34:36.469975  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:34:36.470348  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:03:0f", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:23:04 +0000 UTC Type:0 Mac:52:54:00:03:03:0f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-510563-m03 Clientid:01:52:54:00:03:03:0f}
	I1212 23:34:36.470384  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:34:36.470505  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHPort
	I1212 23:34:36.470717  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHKeyPath
	I1212 23:34:36.470867  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHKeyPath
	I1212 23:34:36.471010  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHUsername
	I1212 23:34:36.471187  160181 main.go:141] libmachine: Using SSH client type: native
	I1212 23:34:36.471497  160181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1212 23:34:36.471509  160181 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:34:36.589653  160181 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702424076.581629410
	
	I1212 23:34:36.589680  160181 fix.go:206] guest clock: 1702424076.581629410
	I1212 23:34:36.589689  160181 fix.go:219] Guest: 2023-12-12 23:34:36.58162941 +0000 UTC Remote: 2023-12-12 23:34:36.467125251 +0000 UTC m=+551.669855764 (delta=114.504159ms)
	I1212 23:34:36.589709  160181 fix.go:190] guest clock delta is within tolerance: 114.504159ms
	I1212 23:34:36.589715  160181 start.go:83] releasing machines lock for "multinode-510563-m03", held for 1m31.47924035s
	I1212 23:34:36.589751  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .DriverName
	I1212 23:34:36.590042  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetIP
	I1212 23:34:36.592787  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:34:36.593227  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:03:0f", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:23:04 +0000 UTC Type:0 Mac:52:54:00:03:03:0f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-510563-m03 Clientid:01:52:54:00:03:03:0f}
	I1212 23:34:36.593257  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:34:36.595236  160181 out.go:177] * Found network options:
	I1212 23:34:36.596702  160181 out.go:177]   - NO_PROXY=192.168.39.38,192.168.39.109
	W1212 23:34:36.598137  160181 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 23:34:36.598169  160181 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 23:34:36.598188  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .DriverName
	I1212 23:34:36.598911  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .DriverName
	I1212 23:34:36.599144  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .DriverName
	I1212 23:34:36.599250  160181 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:34:36.599288  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHHostname
	W1212 23:34:36.599349  160181 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 23:34:36.599373  160181 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 23:34:36.599444  160181 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:34:36.599464  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHHostname
	I1212 23:34:36.602153  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:34:36.602184  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:34:36.602640  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:03:0f", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:23:04 +0000 UTC Type:0 Mac:52:54:00:03:03:0f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-510563-m03 Clientid:01:52:54:00:03:03:0f}
	I1212 23:34:36.602671  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:34:36.602717  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:03:0f", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:23:04 +0000 UTC Type:0 Mac:52:54:00:03:03:0f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-510563-m03 Clientid:01:52:54:00:03:03:0f}
	I1212 23:34:36.602745  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:34:36.602840  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHPort
	I1212 23:34:36.603022  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHKeyPath
	I1212 23:34:36.603041  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHPort
	I1212 23:34:36.603203  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHUsername
	I1212 23:34:36.603221  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHKeyPath
	I1212 23:34:36.603338  160181 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m03/id_rsa Username:docker}
	I1212 23:34:36.603424  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetSSHUsername
	I1212 23:34:36.603563  160181 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m03/id_rsa Username:docker}
	I1212 23:34:36.835283  160181 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 23:34:36.835299  160181 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 23:34:36.841091  160181 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 23:34:36.841132  160181 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:34:36.841184  160181 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:34:36.850133  160181 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 23:34:36.850154  160181 start.go:475] detecting cgroup driver to use...
	I1212 23:34:36.850225  160181 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:34:36.864115  160181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:34:36.877023  160181 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:34:36.877071  160181 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:34:36.891651  160181 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:34:36.904117  160181 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:34:37.028718  160181 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:34:37.146748  160181 docker.go:219] disabling docker service ...
	I1212 23:34:37.146826  160181 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:34:37.162817  160181 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:34:37.176357  160181 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:34:37.291739  160181 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:34:37.477934  160181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:34:37.497133  160181 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:34:37.514764  160181 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 23:34:37.514843  160181 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:34:37.514896  160181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:34:37.523961  160181 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:34:37.524023  160181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:34:37.533471  160181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:34:37.543738  160181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:34:37.553169  160181 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:34:37.565673  160181 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:34:37.575172  160181 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 23:34:37.575247  160181 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:34:37.584738  160181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:34:37.721674  160181 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:34:40.611023  160181 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.889308433s)
	I1212 23:34:40.611054  160181 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:34:40.611100  160181 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:34:40.616812  160181 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 23:34:40.616829  160181 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 23:34:40.616836  160181 command_runner.go:130] > Device: 16h/22d	Inode: 1234        Links: 1
	I1212 23:34:40.616843  160181 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 23:34:40.616848  160181 command_runner.go:130] > Access: 2023-12-12 23:34:40.517086236 +0000
	I1212 23:34:40.616856  160181 command_runner.go:130] > Modify: 2023-12-12 23:34:40.517086236 +0000
	I1212 23:34:40.616862  160181 command_runner.go:130] > Change: 2023-12-12 23:34:40.517086236 +0000
	I1212 23:34:40.616866  160181 command_runner.go:130] >  Birth: -
	I1212 23:34:40.617031  160181 start.go:543] Will wait 60s for crictl version
	I1212 23:34:40.617071  160181 ssh_runner.go:195] Run: which crictl
	I1212 23:34:40.621006  160181 command_runner.go:130] > /usr/bin/crictl
	I1212 23:34:40.621056  160181 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:34:40.662610  160181 command_runner.go:130] > Version:  0.1.0
	I1212 23:34:40.662633  160181 command_runner.go:130] > RuntimeName:  cri-o
	I1212 23:34:40.662640  160181 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1212 23:34:40.662648  160181 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 23:34:40.662722  160181 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:34:40.662789  160181 ssh_runner.go:195] Run: crio --version
	I1212 23:34:40.708151  160181 command_runner.go:130] > crio version 1.24.1
	I1212 23:34:40.708175  160181 command_runner.go:130] > Version:          1.24.1
	I1212 23:34:40.708185  160181 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 23:34:40.708192  160181 command_runner.go:130] > GitTreeState:     dirty
	I1212 23:34:40.708206  160181 command_runner.go:130] > BuildDate:        2023-12-08T06:18:18Z
	I1212 23:34:40.708213  160181 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 23:34:40.708218  160181 command_runner.go:130] > Compiler:         gc
	I1212 23:34:40.708222  160181 command_runner.go:130] > Platform:         linux/amd64
	I1212 23:34:40.708228  160181 command_runner.go:130] > Linkmode:         dynamic
	I1212 23:34:40.708238  160181 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 23:34:40.708245  160181 command_runner.go:130] > SeccompEnabled:   true
	I1212 23:34:40.708249  160181 command_runner.go:130] > AppArmorEnabled:  false
	I1212 23:34:40.709756  160181 ssh_runner.go:195] Run: crio --version
	I1212 23:34:40.759825  160181 command_runner.go:130] > crio version 1.24.1
	I1212 23:34:40.759848  160181 command_runner.go:130] > Version:          1.24.1
	I1212 23:34:40.759858  160181 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 23:34:40.759864  160181 command_runner.go:130] > GitTreeState:     dirty
	I1212 23:34:40.759873  160181 command_runner.go:130] > BuildDate:        2023-12-08T06:18:18Z
	I1212 23:34:40.759880  160181 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 23:34:40.759885  160181 command_runner.go:130] > Compiler:         gc
	I1212 23:34:40.759892  160181 command_runner.go:130] > Platform:         linux/amd64
	I1212 23:34:40.759899  160181 command_runner.go:130] > Linkmode:         dynamic
	I1212 23:34:40.759910  160181 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 23:34:40.759926  160181 command_runner.go:130] > SeccompEnabled:   true
	I1212 23:34:40.759935  160181 command_runner.go:130] > AppArmorEnabled:  false
	I1212 23:34:40.761959  160181 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 23:34:40.763435  160181 out.go:177]   - env NO_PROXY=192.168.39.38
	I1212 23:34:40.764893  160181 out.go:177]   - env NO_PROXY=192.168.39.38,192.168.39.109
	I1212 23:34:40.766228  160181 main.go:141] libmachine: (multinode-510563-m03) Calling .GetIP
	I1212 23:34:40.769092  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:34:40.769451  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:03:0f", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:23:04 +0000 UTC Type:0 Mac:52:54:00:03:03:0f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-510563-m03 Clientid:01:52:54:00:03:03:0f}
	I1212 23:34:40.769488  160181 main.go:141] libmachine: (multinode-510563-m03) DBG | domain multinode-510563-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:03:03:0f in network mk-multinode-510563
	I1212 23:34:40.769711  160181 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 23:34:40.774066  160181 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1212 23:34:40.774114  160181 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563 for IP: 192.168.39.133
	I1212 23:34:40.774131  160181 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:34:40.774287  160181 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1212 23:34:40.774334  160181 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1212 23:34:40.774350  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 23:34:40.774369  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 23:34:40.774385  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 23:34:40.774401  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 23:34:40.774467  160181 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1212 23:34:40.774506  160181 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1212 23:34:40.774522  160181 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:34:40.774558  160181 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:34:40.774594  160181 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:34:40.774627  160181 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1212 23:34:40.774681  160181 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1212 23:34:40.774717  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> /usr/share/ca-certificates/1435412.pem
	I1212 23:34:40.774735  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:34:40.774752  160181 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem -> /usr/share/ca-certificates/143541.pem
	I1212 23:34:40.775195  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:34:40.798631  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 23:34:40.820914  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:34:40.843641  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 23:34:40.867291  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1212 23:34:40.891333  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:34:40.914613  160181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1212 23:34:40.939184  160181 ssh_runner.go:195] Run: openssl version
	I1212 23:34:40.944789  160181 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 23:34:40.945009  160181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1212 23:34:40.955374  160181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1212 23:34:40.959681  160181 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1212 23:34:40.959868  160181 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1212 23:34:40.959925  160181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1212 23:34:40.965337  160181 command_runner.go:130] > 3ec20f2e
	I1212 23:34:40.965404  160181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:34:40.974523  160181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:34:40.984419  160181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:34:40.988831  160181 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:34:40.989048  160181 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:34:40.989109  160181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:34:40.994723  160181 command_runner.go:130] > b5213941
	I1212 23:34:40.994791  160181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:34:41.003676  160181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1212 23:34:41.013880  160181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1212 23:34:41.018455  160181 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1212 23:34:41.018484  160181 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1212 23:34:41.018518  160181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1212 23:34:41.023707  160181 command_runner.go:130] > 51391683
	I1212 23:34:41.023982  160181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1212 23:34:41.033122  160181 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:34:41.037221  160181 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:34:41.037254  160181 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:34:41.037338  160181 ssh_runner.go:195] Run: crio config
	I1212 23:34:41.110387  160181 command_runner.go:130] ! time="2023-12-12 23:34:41.102525044Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1212 23:34:41.110446  160181 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 23:34:41.123271  160181 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 23:34:41.123296  160181 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 23:34:41.123303  160181 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 23:34:41.123309  160181 command_runner.go:130] > #
	I1212 23:34:41.123319  160181 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 23:34:41.123329  160181 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 23:34:41.123339  160181 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 23:34:41.123353  160181 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 23:34:41.123363  160181 command_runner.go:130] > # reload'.
	I1212 23:34:41.123374  160181 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 23:34:41.123386  160181 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 23:34:41.123399  160181 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 23:34:41.123411  160181 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 23:34:41.123420  160181 command_runner.go:130] > [crio]
	I1212 23:34:41.123434  160181 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 23:34:41.123446  160181 command_runner.go:130] > # containers images, in this directory.
	I1212 23:34:41.123463  160181 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1212 23:34:41.123481  160181 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 23:34:41.123492  160181 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1212 23:34:41.123504  160181 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 23:34:41.123512  160181 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 23:34:41.123520  160181 command_runner.go:130] > storage_driver = "overlay"
	I1212 23:34:41.123529  160181 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 23:34:41.123543  160181 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 23:34:41.123551  160181 command_runner.go:130] > storage_option = [
	I1212 23:34:41.123562  160181 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1212 23:34:41.123570  160181 command_runner.go:130] > ]
	I1212 23:34:41.123579  160181 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 23:34:41.123593  160181 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 23:34:41.123606  160181 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 23:34:41.123619  160181 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 23:34:41.123633  160181 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 23:34:41.123644  160181 command_runner.go:130] > # always happen on a node reboot
	I1212 23:34:41.123657  160181 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 23:34:41.123670  160181 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 23:34:41.123682  160181 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 23:34:41.123696  160181 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 23:34:41.123708  160181 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1212 23:34:41.123724  160181 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 23:34:41.123741  160181 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 23:34:41.123751  160181 command_runner.go:130] > # internal_wipe = true
	I1212 23:34:41.123762  160181 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 23:34:41.123773  160181 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 23:34:41.123784  160181 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 23:34:41.123796  160181 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 23:34:41.123810  160181 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 23:34:41.123820  160181 command_runner.go:130] > [crio.api]
	I1212 23:34:41.123829  160181 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 23:34:41.123840  160181 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 23:34:41.123849  160181 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 23:34:41.123859  160181 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 23:34:41.123867  160181 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 23:34:41.123878  160181 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 23:34:41.123885  160181 command_runner.go:130] > # stream_port = "0"
	I1212 23:34:41.123898  160181 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 23:34:41.123908  160181 command_runner.go:130] > # stream_enable_tls = false
	I1212 23:34:41.123921  160181 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 23:34:41.123931  160181 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 23:34:41.123941  160181 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 23:34:41.123952  160181 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1212 23:34:41.123957  160181 command_runner.go:130] > # minutes.
	I1212 23:34:41.123967  160181 command_runner.go:130] > # stream_tls_cert = ""
	I1212 23:34:41.123978  160181 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 23:34:41.123992  160181 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1212 23:34:41.124002  160181 command_runner.go:130] > # stream_tls_key = ""
	I1212 23:34:41.124016  160181 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 23:34:41.124029  160181 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 23:34:41.124038  160181 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1212 23:34:41.124043  160181 command_runner.go:130] > # stream_tls_ca = ""
	I1212 23:34:41.124057  160181 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 23:34:41.124069  160181 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1212 23:34:41.124081  160181 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 23:34:41.124092  160181 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1212 23:34:41.124117  160181 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 23:34:41.124130  160181 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 23:34:41.124136  160181 command_runner.go:130] > [crio.runtime]
	I1212 23:34:41.124145  160181 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 23:34:41.124158  160181 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 23:34:41.124166  160181 command_runner.go:130] > # "nofile=1024:2048"
	I1212 23:34:41.124180  160181 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 23:34:41.124190  160181 command_runner.go:130] > # default_ulimits = [
	I1212 23:34:41.124199  160181 command_runner.go:130] > # ]
	I1212 23:34:41.124209  160181 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 23:34:41.124216  160181 command_runner.go:130] > # no_pivot = false
	I1212 23:34:41.124224  160181 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 23:34:41.124237  160181 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 23:34:41.124249  160181 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 23:34:41.124263  160181 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 23:34:41.124274  160181 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 23:34:41.124288  160181 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 23:34:41.124298  160181 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1212 23:34:41.124304  160181 command_runner.go:130] > # Cgroup setting for conmon
	I1212 23:34:41.124313  160181 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 23:34:41.124323  160181 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 23:34:41.124336  160181 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 23:34:41.124348  160181 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 23:34:41.124360  160181 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 23:34:41.124369  160181 command_runner.go:130] > conmon_env = [
	I1212 23:34:41.124379  160181 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1212 23:34:41.124387  160181 command_runner.go:130] > ]
	I1212 23:34:41.124393  160181 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 23:34:41.124401  160181 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 23:34:41.124414  160181 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 23:34:41.124425  160181 command_runner.go:130] > # default_env = [
	I1212 23:34:41.124448  160181 command_runner.go:130] > # ]
	I1212 23:34:41.124468  160181 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 23:34:41.124477  160181 command_runner.go:130] > # selinux = false
	I1212 23:34:41.124488  160181 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 23:34:41.124502  160181 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1212 23:34:41.124516  160181 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1212 23:34:41.124526  160181 command_runner.go:130] > # seccomp_profile = ""
	I1212 23:34:41.124538  160181 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1212 23:34:41.124550  160181 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1212 23:34:41.124561  160181 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1212 23:34:41.124567  160181 command_runner.go:130] > # which might increase security.
	I1212 23:34:41.124578  160181 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1212 23:34:41.124592  160181 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 23:34:41.124605  160181 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 23:34:41.124619  160181 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 23:34:41.124632  160181 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 23:34:41.124644  160181 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:34:41.124652  160181 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 23:34:41.124664  160181 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 23:34:41.124675  160181 command_runner.go:130] > # the cgroup blockio controller.
	I1212 23:34:41.124687  160181 command_runner.go:130] > # blockio_config_file = ""
	I1212 23:34:41.124700  160181 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 23:34:41.124710  160181 command_runner.go:130] > # irqbalance daemon.
	I1212 23:34:41.124723  160181 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 23:34:41.124736  160181 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 23:34:41.124744  160181 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:34:41.124754  160181 command_runner.go:130] > # rdt_config_file = ""
	I1212 23:34:41.124767  160181 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 23:34:41.124778  160181 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 23:34:41.124791  160181 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 23:34:41.124802  160181 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 23:34:41.124815  160181 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 23:34:41.124826  160181 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 23:34:41.124835  160181 command_runner.go:130] > # will be added.
	I1212 23:34:41.124845  160181 command_runner.go:130] > # default_capabilities = [
	I1212 23:34:41.124855  160181 command_runner.go:130] > # 	"CHOWN",
	I1212 23:34:41.124866  160181 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 23:34:41.124875  160181 command_runner.go:130] > # 	"FSETID",
	I1212 23:34:41.124884  160181 command_runner.go:130] > # 	"FOWNER",
	I1212 23:34:41.124894  160181 command_runner.go:130] > # 	"SETGID",
	I1212 23:34:41.124903  160181 command_runner.go:130] > # 	"SETUID",
	I1212 23:34:41.124911  160181 command_runner.go:130] > # 	"SETPCAP",
	I1212 23:34:41.124918  160181 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 23:34:41.124924  160181 command_runner.go:130] > # 	"KILL",
	I1212 23:34:41.124933  160181 command_runner.go:130] > # ]
	I1212 23:34:41.124948  160181 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 23:34:41.124960  160181 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 23:34:41.124969  160181 command_runner.go:130] > # default_sysctls = [
	I1212 23:34:41.124976  160181 command_runner.go:130] > # ]
	I1212 23:34:41.124983  160181 command_runner.go:130] > # List of devices on the host that a
	I1212 23:34:41.124995  160181 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 23:34:41.125003  160181 command_runner.go:130] > # allowed_devices = [
	I1212 23:34:41.125012  160181 command_runner.go:130] > # 	"/dev/fuse",
	I1212 23:34:41.125021  160181 command_runner.go:130] > # ]
	I1212 23:34:41.125033  160181 command_runner.go:130] > # List of additional devices. specified as
	I1212 23:34:41.125048  160181 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 23:34:41.125060  160181 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 23:34:41.125085  160181 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 23:34:41.125093  160181 command_runner.go:130] > # additional_devices = [
	I1212 23:34:41.125099  160181 command_runner.go:130] > # ]
	I1212 23:34:41.125111  160181 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 23:34:41.125122  160181 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 23:34:41.125132  160181 command_runner.go:130] > # 	"/etc/cdi",
	I1212 23:34:41.125139  160181 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 23:34:41.125148  160181 command_runner.go:130] > # ]
	I1212 23:34:41.125158  160181 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 23:34:41.125171  160181 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 23:34:41.125178  160181 command_runner.go:130] > # Defaults to false.
	I1212 23:34:41.125184  160181 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 23:34:41.125198  160181 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 23:34:41.125212  160181 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 23:34:41.125222  160181 command_runner.go:130] > # hooks_dir = [
	I1212 23:34:41.125233  160181 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 23:34:41.125242  160181 command_runner.go:130] > # ]
	I1212 23:34:41.125255  160181 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 23:34:41.125266  160181 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 23:34:41.125273  160181 command_runner.go:130] > # its default mounts from the following two files:
	I1212 23:34:41.125283  160181 command_runner.go:130] > #
	I1212 23:34:41.125294  160181 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 23:34:41.125308  160181 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 23:34:41.125320  160181 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 23:34:41.125329  160181 command_runner.go:130] > #
	I1212 23:34:41.125341  160181 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 23:34:41.125353  160181 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 23:34:41.125364  160181 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 23:34:41.125373  160181 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 23:34:41.125382  160181 command_runner.go:130] > #
	I1212 23:34:41.125390  160181 command_runner.go:130] > # default_mounts_file = ""
	I1212 23:34:41.125402  160181 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 23:34:41.125416  160181 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 23:34:41.125426  160181 command_runner.go:130] > pids_limit = 1024
	I1212 23:34:41.125439  160181 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1212 23:34:41.125448  160181 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 23:34:41.125463  160181 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 23:34:41.125480  160181 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 23:34:41.125490  160181 command_runner.go:130] > # log_size_max = -1
	I1212 23:34:41.125504  160181 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1212 23:34:41.125514  160181 command_runner.go:130] > # log_to_journald = false
	I1212 23:34:41.125524  160181 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 23:34:41.125532  160181 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 23:34:41.125539  160181 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 23:34:41.125551  160181 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 23:34:41.125564  160181 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 23:34:41.125574  160181 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 23:34:41.125584  160181 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 23:34:41.125594  160181 command_runner.go:130] > # read_only = false
	I1212 23:34:41.125607  160181 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 23:34:41.125619  160181 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 23:34:41.125627  160181 command_runner.go:130] > # live configuration reload.
	I1212 23:34:41.125634  160181 command_runner.go:130] > # log_level = "info"
	I1212 23:34:41.125646  160181 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 23:34:41.125659  160181 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:34:41.125669  160181 command_runner.go:130] > # log_filter = ""
	I1212 23:34:41.125679  160181 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 23:34:41.125692  160181 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 23:34:41.125699  160181 command_runner.go:130] > # separated by comma.
	I1212 23:34:41.125709  160181 command_runner.go:130] > # uid_mappings = ""
	I1212 23:34:41.125716  160181 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 23:34:41.125728  160181 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 23:34:41.125738  160181 command_runner.go:130] > # separated by comma.
	I1212 23:34:41.125749  160181 command_runner.go:130] > # gid_mappings = ""
	I1212 23:34:41.125759  160181 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 23:34:41.125772  160181 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 23:34:41.125785  160181 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 23:34:41.125795  160181 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 23:34:41.125804  160181 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 23:34:41.125815  160181 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 23:34:41.125829  160181 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 23:34:41.125839  160181 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 23:34:41.125852  160181 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 23:34:41.125865  160181 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 23:34:41.125877  160181 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 23:34:41.125886  160181 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 23:34:41.125892  160181 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 23:34:41.125905  160181 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 23:34:41.125917  160181 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 23:34:41.125926  160181 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 23:34:41.125938  160181 command_runner.go:130] > drop_infra_ctr = false
	I1212 23:34:41.125950  160181 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 23:34:41.125963  160181 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 23:34:41.125975  160181 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 23:34:41.125985  160181 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 23:34:41.125996  160181 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 23:34:41.126008  160181 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 23:34:41.126016  160181 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 23:34:41.126031  160181 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 23:34:41.126041  160181 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1212 23:34:41.126054  160181 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 23:34:41.126064  160181 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1212 23:34:41.126073  160181 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1212 23:34:41.126084  160181 command_runner.go:130] > # default_runtime = "runc"
	I1212 23:34:41.126097  160181 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 23:34:41.126112  160181 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1212 23:34:41.126129  160181 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I1212 23:34:41.126140  160181 command_runner.go:130] > # creation as a file is not desired either.
	I1212 23:34:41.126153  160181 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 23:34:41.126164  160181 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 23:34:41.126176  160181 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 23:34:41.126185  160181 command_runner.go:130] > # ]
	I1212 23:34:41.126198  160181 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 23:34:41.126212  160181 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 23:34:41.126225  160181 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1212 23:34:41.126237  160181 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1212 23:34:41.126243  160181 command_runner.go:130] > #
	I1212 23:34:41.126252  160181 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1212 23:34:41.126264  160181 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1212 23:34:41.126275  160181 command_runner.go:130] > #  runtime_type = "oci"
	I1212 23:34:41.126286  160181 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1212 23:34:41.126297  160181 command_runner.go:130] > #  privileged_without_host_devices = false
	I1212 23:34:41.126307  160181 command_runner.go:130] > #  allowed_annotations = []
	I1212 23:34:41.126316  160181 command_runner.go:130] > # Where:
	I1212 23:34:41.126326  160181 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1212 23:34:41.126338  160181 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1212 23:34:41.126352  160181 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 23:34:41.126366  160181 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 23:34:41.126376  160181 command_runner.go:130] > #   in $PATH.
	I1212 23:34:41.126386  160181 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1212 23:34:41.126398  160181 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 23:34:41.126411  160181 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1212 23:34:41.126417  160181 command_runner.go:130] > #   state.
	I1212 23:34:41.126427  160181 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 23:34:41.126441  160181 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1212 23:34:41.126454  160181 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 23:34:41.126469  160181 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 23:34:41.126482  160181 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 23:34:41.126496  160181 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 23:34:41.126504  160181 command_runner.go:130] > #   The currently recognized values are:
	I1212 23:34:41.126518  160181 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 23:34:41.126534  160181 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 23:34:41.126548  160181 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 23:34:41.126561  160181 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 23:34:41.126573  160181 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 23:34:41.126586  160181 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 23:34:41.126595  160181 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 23:34:41.126605  160181 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1212 23:34:41.126617  160181 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 23:34:41.126628  160181 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 23:34:41.126639  160181 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1212 23:34:41.126646  160181 command_runner.go:130] > runtime_type = "oci"
	I1212 23:34:41.126656  160181 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 23:34:41.126664  160181 command_runner.go:130] > runtime_config_path = ""
	I1212 23:34:41.126673  160181 command_runner.go:130] > monitor_path = ""
	I1212 23:34:41.126679  160181 command_runner.go:130] > monitor_cgroup = ""
	I1212 23:34:41.126686  160181 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 23:34:41.126696  160181 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1212 23:34:41.126707  160181 command_runner.go:130] > # running containers
	I1212 23:34:41.126716  160181 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1212 23:34:41.126730  160181 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1212 23:34:41.126757  160181 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1212 23:34:41.126767  160181 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1212 23:34:41.126777  160181 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1212 23:34:41.126788  160181 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1212 23:34:41.126799  160181 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1212 23:34:41.126811  160181 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1212 23:34:41.126823  160181 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1212 23:34:41.126834  160181 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1212 23:34:41.126847  160181 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 23:34:41.126856  160181 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 23:34:41.126868  160181 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 23:34:41.126884  160181 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1212 23:34:41.126900  160181 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1212 23:34:41.126913  160181 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 23:34:41.126930  160181 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 23:34:41.126943  160181 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 23:34:41.126953  160181 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 23:34:41.126968  160181 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 23:34:41.126978  160181 command_runner.go:130] > # Example:
	I1212 23:34:41.126990  160181 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 23:34:41.127001  160181 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 23:34:41.127013  160181 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 23:34:41.127025  160181 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 23:34:41.127034  160181 command_runner.go:130] > # cpuset = 0
	I1212 23:34:41.127041  160181 command_runner.go:130] > # cpushares = "0-1"
	I1212 23:34:41.127047  160181 command_runner.go:130] > # Where:
	I1212 23:34:41.127058  160181 command_runner.go:130] > # The workload name is workload-type.
	I1212 23:34:41.127073  160181 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 23:34:41.127085  160181 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 23:34:41.127098  160181 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 23:34:41.127113  160181 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 23:34:41.127124  160181 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 23:34:41.127130  160181 command_runner.go:130] > # 
	I1212 23:34:41.127141  160181 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 23:34:41.127150  160181 command_runner.go:130] > #
	I1212 23:34:41.127163  160181 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 23:34:41.127176  160181 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1212 23:34:41.127189  160181 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1212 23:34:41.127202  160181 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1212 23:34:41.127212  160181 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1212 23:34:41.127221  160181 command_runner.go:130] > [crio.image]
	I1212 23:34:41.127235  160181 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 23:34:41.127246  160181 command_runner.go:130] > # default_transport = "docker://"
	I1212 23:34:41.127260  160181 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 23:34:41.127273  160181 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 23:34:41.127283  160181 command_runner.go:130] > # global_auth_file = ""
	I1212 23:34:41.127294  160181 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 23:34:41.127303  160181 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:34:41.127311  160181 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1212 23:34:41.127325  160181 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 23:34:41.127338  160181 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 23:34:41.127350  160181 command_runner.go:130] > # This option supports live configuration reload.
	I1212 23:34:41.127360  160181 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 23:34:41.127372  160181 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 23:34:41.127385  160181 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1212 23:34:41.127394  160181 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1212 23:34:41.127406  160181 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 23:34:41.127417  160181 command_runner.go:130] > # pause_command = "/pause"
	I1212 23:34:41.127431  160181 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 23:34:41.127445  160181 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 23:34:41.127461  160181 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 23:34:41.127474  160181 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 23:34:41.127483  160181 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 23:34:41.127493  160181 command_runner.go:130] > # signature_policy = ""
	I1212 23:34:41.127506  160181 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 23:34:41.127520  160181 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 23:34:41.127530  160181 command_runner.go:130] > # changing them here.
	I1212 23:34:41.127537  160181 command_runner.go:130] > # insecure_registries = [
	I1212 23:34:41.127546  160181 command_runner.go:130] > # ]
	I1212 23:34:41.127562  160181 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 23:34:41.127571  160181 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 23:34:41.127581  160181 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 23:34:41.127593  160181 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 23:34:41.127604  160181 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 23:34:41.127618  160181 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 23:34:41.127627  160181 command_runner.go:130] > # CNI plugins.
	I1212 23:34:41.127636  160181 command_runner.go:130] > [crio.network]
	I1212 23:34:41.127649  160181 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 23:34:41.127657  160181 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1212 23:34:41.127664  160181 command_runner.go:130] > # cni_default_network = ""
	I1212 23:34:41.127672  160181 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 23:34:41.127679  160181 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 23:34:41.127691  160181 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 23:34:41.127702  160181 command_runner.go:130] > # plugin_dirs = [
	I1212 23:34:41.127709  160181 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 23:34:41.127718  160181 command_runner.go:130] > # ]
	I1212 23:34:41.127731  160181 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 23:34:41.127741  160181 command_runner.go:130] > [crio.metrics]
	I1212 23:34:41.127752  160181 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 23:34:41.127760  160181 command_runner.go:130] > enable_metrics = true
	I1212 23:34:41.127767  160181 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 23:34:41.127772  160181 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 23:34:41.127780  160181 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1212 23:34:41.127789  160181 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 23:34:41.127795  160181 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 23:34:41.127802  160181 command_runner.go:130] > # metrics_collectors = [
	I1212 23:34:41.127805  160181 command_runner.go:130] > # 	"operations",
	I1212 23:34:41.127813  160181 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1212 23:34:41.127820  160181 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1212 23:34:41.127824  160181 command_runner.go:130] > # 	"operations_errors",
	I1212 23:34:41.127833  160181 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1212 23:34:41.127844  160181 command_runner.go:130] > # 	"image_pulls_by_name",
	I1212 23:34:41.127855  160181 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1212 23:34:41.127864  160181 command_runner.go:130] > # 	"image_pulls_failures",
	I1212 23:34:41.127874  160181 command_runner.go:130] > # 	"image_pulls_successes",
	I1212 23:34:41.127884  160181 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 23:34:41.127894  160181 command_runner.go:130] > # 	"image_layer_reuse",
	I1212 23:34:41.127904  160181 command_runner.go:130] > # 	"containers_oom_total",
	I1212 23:34:41.127912  160181 command_runner.go:130] > # 	"containers_oom",
	I1212 23:34:41.127916  160181 command_runner.go:130] > # 	"processes_defunct",
	I1212 23:34:41.127923  160181 command_runner.go:130] > # 	"operations_total",
	I1212 23:34:41.127928  160181 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 23:34:41.127934  160181 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 23:34:41.127939  160181 command_runner.go:130] > # 	"operations_errors_total",
	I1212 23:34:41.127946  160181 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 23:34:41.127950  160181 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 23:34:41.127957  160181 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 23:34:41.127962  160181 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 23:34:41.127968  160181 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 23:34:41.127973  160181 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 23:34:41.127978  160181 command_runner.go:130] > # ]
	I1212 23:34:41.127983  160181 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 23:34:41.127989  160181 command_runner.go:130] > # metrics_port = 9090
	I1212 23:34:41.127995  160181 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 23:34:41.128001  160181 command_runner.go:130] > # metrics_socket = ""
	I1212 23:34:41.128006  160181 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 23:34:41.128015  160181 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 23:34:41.128023  160181 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 23:34:41.128029  160181 command_runner.go:130] > # certificate on any modification event.
	I1212 23:34:41.128035  160181 command_runner.go:130] > # metrics_cert = ""
	I1212 23:34:41.128040  160181 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 23:34:41.128047  160181 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 23:34:41.128052  160181 command_runner.go:130] > # metrics_key = ""
	I1212 23:34:41.128063  160181 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 23:34:41.128073  160181 command_runner.go:130] > [crio.tracing]
	I1212 23:34:41.128085  160181 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 23:34:41.128092  160181 command_runner.go:130] > # enable_tracing = false
	I1212 23:34:41.128097  160181 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1212 23:34:41.128104  160181 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1212 23:34:41.128110  160181 command_runner.go:130] > # Number of samples to collect per million spans.
	I1212 23:34:41.128117  160181 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 23:34:41.128123  160181 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 23:34:41.128129  160181 command_runner.go:130] > [crio.stats]
	I1212 23:34:41.128134  160181 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 23:34:41.128142  160181 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 23:34:41.128148  160181 command_runner.go:130] > # stats_collection_period = 0
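	The block above is CRI-O's commented configuration as echoed back by the provisioner. To spot-check the effective configuration directly on the node, something along these lines should work (a sketch only; the socket path is taken from the config dump above, and "crio config" prints a commented TOML view of the daemon's settings). The workloads section is opt-in via annotations only, so a hypothetical pod using the commented "workload-type" example would carry the activation and per-container annotations shown; the names mirror the example and are illustrative, not something this test does.
	  # Sketch: inspect CRI-O's config and runtime status on the node
	  sudo crio config | head -n 40
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info
	  # Hypothetical opt-in to the commented "workload-type" example above
	  kubectl run pause-demo --image=registry.k8s.io/pause:3.9 \
	    --annotations='io.crio/workload=' \
	    --annotations='io.crio.workload-type/pause-demo={"cpushares": "512"}'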
	I1212 23:34:41.128203  160181 cni.go:84] Creating CNI manager for ""
	I1212 23:34:41.128214  160181 cni.go:136] 3 nodes found, recommending kindnet
	I1212 23:34:41.128225  160181 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:34:41.128251  160181 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.133 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-510563 NodeName:multinode-510563-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.133 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:34:41.128347  160181 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.133
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-510563-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.133
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:34:41.128392  160181 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-510563-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.133
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-510563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 23:34:41.128462  160181 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:34:41.140591  160181 command_runner.go:130] > kubeadm
	I1212 23:34:41.140624  160181 command_runner.go:130] > kubectl
	I1212 23:34:41.140630  160181 command_runner.go:130] > kubelet
	I1212 23:34:41.140697  160181 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:34:41.140763  160181 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1212 23:34:41.151695  160181 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1212 23:34:41.168653  160181 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
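	The rendered kubeadm config and kubelet drop-in above are what get written to the worker as 10-kubeadm.conf and kubelet.service in the two scp steps just logged. If either needed verifying by hand, a couple of stock commands suffice (a sketch; run on the relevant node or against the running cluster):
	  # Merged kubelet unit plus drop-ins as systemd sees them on the node
	  systemctl cat kubelet
	  # kubeadm join later re-reads the cluster's stored configuration; it can be inspected with:
	  kubectl -n kube-system get cm kubeadm-config -o yaml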
	I1212 23:34:41.184538  160181 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I1212 23:34:41.188091  160181 command_runner.go:130] > 192.168.39.38	control-plane.minikube.internal
	I1212 23:34:41.188382  160181 host.go:66] Checking if "multinode-510563" exists ...
	I1212 23:34:41.188699  160181 config.go:182] Loaded profile config "multinode-510563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:34:41.188798  160181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:34:41.188863  160181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:34:41.203391  160181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36021
	I1212 23:34:41.203798  160181 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:34:41.204247  160181 main.go:141] libmachine: Using API Version  1
	I1212 23:34:41.204269  160181 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:34:41.204602  160181 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:34:41.204796  160181 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:34:41.204984  160181 start.go:304] JoinCluster: &{Name:multinode-510563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-510563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.109 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.133 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:34:41.205088  160181 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1212 23:34:41.205110  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:34:41.207777  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:34:41.208099  160181 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:34:41.208122  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:34:41.208289  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:34:41.208485  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:34:41.208638  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:34:41.208802  160181 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa Username:docker}
	I1212 23:34:41.391255  160181 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token zx6qmi.od07miq0t2zqq349 --discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
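	The --discovery-token-ca-cert-hash in the join command above is the SHA-256 of the cluster CA's public key. If it ever had to be recomputed on the control plane from the CA file alone, the standard openssl pipeline from the kubeadm documentation is:
	  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'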
	I1212 23:34:41.393874  160181 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.133 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1212 23:34:41.393917  160181 host.go:66] Checking if "multinode-510563" exists ...
	I1212 23:34:41.394240  160181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:34:41.394284  160181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:34:41.409437  160181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40217
	I1212 23:34:41.409897  160181 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:34:41.410350  160181 main.go:141] libmachine: Using API Version  1
	I1212 23:34:41.410371  160181 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:34:41.410701  160181 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:34:41.410888  160181 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:34:41.411116  160181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-510563-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1212 23:34:41.411139  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:34:41.414397  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:34:41.414938  160181 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:34:41.414976  160181 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:34:41.415123  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:34:41.415322  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:34:41.415509  160181 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:34:41.415690  160181 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa Username:docker}
	I1212 23:34:41.608227  160181 command_runner.go:130] > node/multinode-510563-m03 cordoned
	I1212 23:34:44.650077  160181 command_runner.go:130] > pod "busybox-5bc68d56bd-5hvf4" has DeletionTimestamp older than 1 seconds, skipping
	I1212 23:34:44.650103  160181 command_runner.go:130] > node/multinode-510563-m03 drained
	I1212 23:34:44.651786  160181 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1212 23:34:44.651811  160181 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-lqdxw, kube-system/kube-proxy-fbk65
	I1212 23:34:44.651842  160181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-510563-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.240698706s)
	I1212 23:34:44.651860  160181 node.go:108] successfully drained node "m03"
	I1212 23:34:44.652188  160181 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:34:44.652392  160181 kapi.go:59] client config for multinode-510563: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.crt", KeyFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.key", CAFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:34:44.652726  160181 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1212 23:34:44.652783  160181 round_trippers.go:463] DELETE https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m03
	I1212 23:34:44.652789  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:44.652798  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:44.652805  160181 round_trippers.go:473]     Content-Type: application/json
	I1212 23:34:44.652813  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:44.664601  160181 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1212 23:34:44.664627  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:44.664637  160181 round_trippers.go:580]     Audit-Id: 9b96eb8e-3cfa-4c68-92b2-954673d8dc50
	I1212 23:34:44.664645  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:44.664652  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:44.664660  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:44.664668  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:44.664681  160181 round_trippers.go:580]     Content-Length: 171
	I1212 23:34:44.664691  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:44 GMT
	I1212 23:34:44.664719  160181 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-510563-m03","kind":"nodes","uid":"6de0e5a4-53e7-4397-9be8-0053fa116498"}}
	I1212 23:34:44.664754  160181 node.go:124] successfully deleted node "m03"
	I1212 23:34:44.664773  160181 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.133 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
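	Before rejoining, the stale m03 node is drained and its Node object deleted, as logged above. Done manually against the same cluster, the sequence would be roughly the following (flags lifted from the drain invocation in the log; --delete-local-data is omitted since kubectl reports it as deprecated):
	  kubectl drain multinode-510563-m03 --force --grace-period=1 \
	    --skip-wait-for-delete-timeout=1 --disable-eviction \
	    --ignore-daemonsets --delete-emptydir-data
	  kubectl delete node multinode-510563-m03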
	I1212 23:34:44.664793  160181 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.133 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1212 23:34:44.664821  160181 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zx6qmi.od07miq0t2zqq349 --discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-510563-m03"
	I1212 23:34:44.724607  160181 command_runner.go:130] ! W1212 23:34:44.716509    2383 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1212 23:34:44.724703  160181 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1212 23:34:44.860739  160181 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1212 23:34:44.860773  160181 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1212 23:34:45.632154  160181 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 23:34:45.632188  160181 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1212 23:34:45.632205  160181 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1212 23:34:45.632216  160181 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:34:45.632227  160181 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:34:45.632236  160181 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 23:34:45.632250  160181 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1212 23:34:45.632262  160181 command_runner.go:130] > This node has joined the cluster:
	I1212 23:34:45.632274  160181 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1212 23:34:45.632285  160181 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1212 23:34:45.632299  160181 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1212 23:34:45.632334  160181 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1212 23:34:45.918869  160181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=multinode-510563 minikube.k8s.io/updated_at=2023_12_12T23_34_45_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:34:46.033856  160181 command_runner.go:130] > node/multinode-510563-m02 labeled
	I1212 23:34:46.043417  160181 command_runner.go:130] > node/multinode-510563-m03 labeled
	I1212 23:34:46.045792  160181 start.go:306] JoinCluster complete in 4.840804732s
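	With the join complete and the version/primary labels applied above, the new worker should now be visible from the control plane, e.g.:
	  # Sanity check after the join; the selector matches the labels applied above
	  kubectl get nodes -o wide
	  kubectl get nodes -l minikube.k8s.io/primary=false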
	I1212 23:34:46.045822  160181 cni.go:84] Creating CNI manager for ""
	I1212 23:34:46.045830  160181 cni.go:136] 3 nodes found, recommending kindnet
	I1212 23:34:46.045891  160181 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 23:34:46.052336  160181 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 23:34:46.052359  160181 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1212 23:34:46.052368  160181 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 23:34:46.052378  160181 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 23:34:46.052387  160181 command_runner.go:130] > Access: 2023-12-12 23:30:35.501189212 +0000
	I1212 23:34:46.052401  160181 command_runner.go:130] > Modify: 2023-12-08 06:25:18.000000000 +0000
	I1212 23:34:46.052410  160181 command_runner.go:130] > Change: 2023-12-12 23:30:33.624189212 +0000
	I1212 23:34:46.052420  160181 command_runner.go:130] >  Birth: -
	I1212 23:34:46.052474  160181 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 23:34:46.052487  160181 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 23:34:46.078096  160181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 23:34:46.395876  160181 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1212 23:34:46.400123  160181 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1212 23:34:46.404463  160181 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1212 23:34:46.417293  160181 command_runner.go:130] > daemonset.apps/kindnet configured
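	The CNI manifest applied here is the kindnet daemonset. A quick health check would be along these lines (assuming the manifest keeps its usual app=kindnet label; adjust the selector if not):
	  kubectl -n kube-system rollout status daemonset/kindnet
	  kubectl -n kube-system get pods -l app=kindnet -o wide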
	I1212 23:34:46.420693  160181 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:34:46.421001  160181 kapi.go:59] client config for multinode-510563: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.crt", KeyFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.key", CAFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:34:46.421397  160181 round_trippers.go:463] GET https://192.168.39.38:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 23:34:46.421413  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:46.421425  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:46.421435  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:46.423952  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:34:46.423969  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:46.423978  160181 round_trippers.go:580]     Content-Length: 291
	I1212 23:34:46.423986  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:46 GMT
	I1212 23:34:46.423994  160181 round_trippers.go:580]     Audit-Id: 96c5f35f-b6bd-4a3b-a77b-8f51b82ded26
	I1212 23:34:46.424007  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:46.424016  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:46.424029  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:46.424040  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:46.424066  160181 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b98a4ef4-74a2-4d35-a29b-c065c5f3121c","resourceVersion":"907","creationTimestamp":"2023-12-12T23:20:36Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1212 23:34:46.424162  160181 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-510563" context rescaled to 1 replicas
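	The rescale above goes through the coredns deployment's scale subresource. An equivalent kubectl invocation, should it ever need repeating by hand, is simply:
	  kubectl -n kube-system scale deployment coredns --replicas=1
	  kubectl -n kube-system get deployment coredns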
	I1212 23:34:46.424191  160181 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.133 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1212 23:34:46.426162  160181 out.go:177] * Verifying Kubernetes components...
	I1212 23:34:46.427607  160181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:34:46.442527  160181 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:34:46.442736  160181 kapi.go:59] client config for multinode-510563: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.crt", KeyFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/multinode-510563/client.key", CAFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:34:46.442974  160181 node_ready.go:35] waiting up to 6m0s for node "multinode-510563-m03" to be "Ready" ...
	I1212 23:34:46.443034  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m03
	I1212 23:34:46.443041  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:46.443048  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:46.443054  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:46.446804  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:34:46.446827  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:46.446836  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:46.446845  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:46.446852  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:46.446860  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:46 GMT
	I1212 23:34:46.446868  160181 round_trippers.go:580]     Audit-Id: 10250aa1-e234-4bb5-a206-d0a35e24b094
	I1212 23:34:46.446878  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:46.447517  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m03","uid":"3755c9f4-1b3d-4c84-8733-ec4cec8b6525","resourceVersion":"1240","creationTimestamp":"2023-12-12T23:34:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_34_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:34:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I1212 23:34:46.447779  160181 node_ready.go:49] node "multinode-510563-m03" has status "Ready":"True"
	I1212 23:34:46.447794  160181 node_ready.go:38] duration metric: took 4.806772ms waiting for node "multinode-510563-m03" to be "Ready" ...
	I1212 23:34:46.447803  160181 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
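	The readiness checks from here on poll the API server directly through the client config above. The same waits can be expressed with kubectl wait, using the timeouts from the log (a sketch; the k8s-app=kube-dns selector covers the coredns pod checked next):
	  kubectl wait --for=condition=Ready node/multinode-510563-m03 --timeout=6m
	  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m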
	I1212 23:34:46.447883  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I1212 23:34:46.447892  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:46.447903  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:46.447912  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:46.456105  160181 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 23:34:46.456125  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:46.456135  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:46.456140  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:46.456145  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:46 GMT
	I1212 23:34:46.456151  160181 round_trippers.go:580]     Audit-Id: fbee5333-f89a-42b6-96d3-a1662f294392
	I1212 23:34:46.456156  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:46.456176  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:46.459062  160181 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1247"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"894","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82047 chars]
	I1212 23:34:46.461634  160181 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zcxks" in "kube-system" namespace to be "Ready" ...
	I1212 23:34:46.461721  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcxks
	I1212 23:34:46.461738  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:46.461749  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:46.461758  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:46.465330  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:34:46.465351  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:46.465357  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:46.465365  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:46.465370  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:46 GMT
	I1212 23:34:46.465376  160181 round_trippers.go:580]     Audit-Id: e17c2f56-bf32-42e0-a357-2f495c2bec92
	I1212 23:34:46.465381  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:46.465388  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:46.465529  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcxks","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"503de693-19d6-45c5-97c6-3b8e5657bfee","resourceVersion":"894","creationTimestamp":"2023-12-12T23:20:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"deafbe12-0e38-4eba-b2b1-1422139de220","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"deafbe12-0e38-4eba-b2b1-1422139de220\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1212 23:34:46.466019  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:34:46.466034  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:46.466041  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:46.466047  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:46.467887  160181 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:34:46.467900  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:46.467906  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:46.467912  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:46.467917  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:46.467923  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:46.467932  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:46 GMT
	I1212 23:34:46.467940  160181 round_trippers.go:580]     Audit-Id: 6507f858-818e-4130-8f30-00454c23d2d3
	I1212 23:34:46.468130  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"926","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 23:34:46.468528  160181 pod_ready.go:92] pod "coredns-5dd5756b68-zcxks" in "kube-system" namespace has status "Ready":"True"
	I1212 23:34:46.468550  160181 pod_ready.go:81] duration metric: took 6.889675ms waiting for pod "coredns-5dd5756b68-zcxks" in "kube-system" namespace to be "Ready" ...
	I1212 23:34:46.468562  160181 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:34:46.468618  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-510563
	I1212 23:34:46.468628  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:46.468639  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:46.468651  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:46.470699  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:34:46.470710  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:46.470715  160181 round_trippers.go:580]     Audit-Id: 2d6ec610-f13c-46b6-94d8-f9b27ce7c189
	I1212 23:34:46.470720  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:46.470725  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:46.470730  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:46.470736  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:46.470741  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:46 GMT
	I1212 23:34:46.470964  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-510563","namespace":"kube-system","uid":"2748a67b-24f2-4b90-bf95-eb56755a397a","resourceVersion":"917","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.mirror":"99da52f53b721a1a612acc1bca02d501","kubernetes.io/config.seen":"2023-12-12T23:20:36.354957049Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1212 23:34:46.471285  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:34:46.471299  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:46.471306  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:46.471311  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:46.473270  160181 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:34:46.473288  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:46.473297  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:46 GMT
	I1212 23:34:46.473304  160181 round_trippers.go:580]     Audit-Id: 4fd9f7c3-c8dd-4665-8c58-434705170905
	I1212 23:34:46.473311  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:46.473319  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:46.473327  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:46.473338  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:46.473570  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"926","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 23:34:46.473848  160181 pod_ready.go:92] pod "etcd-multinode-510563" in "kube-system" namespace has status "Ready":"True"
	I1212 23:34:46.473862  160181 pod_ready.go:81] duration metric: took 5.293409ms waiting for pod "etcd-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:34:46.473876  160181 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:34:46.473929  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-510563
	I1212 23:34:46.473939  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:46.473949  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:46.473959  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:46.475752  160181 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:34:46.475770  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:46.475779  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:46 GMT
	I1212 23:34:46.475787  160181 round_trippers.go:580]     Audit-Id: 752a5dab-9528-4f0e-a671-94c2b839c0ad
	I1212 23:34:46.475795  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:46.475803  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:46.475811  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:46.475826  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:46.475968  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-510563","namespace":"kube-system","uid":"e8a8ed00-d13d-44f0-b7d6-b42bf1342d95","resourceVersion":"900","creationTimestamp":"2023-12-12T23:20:34Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.38:8443","kubernetes.io/config.hash":"4b970951c1b4ca2bc525afa7c2eb2fef","kubernetes.io/config.mirror":"4b970951c1b4ca2bc525afa7c2eb2fef","kubernetes.io/config.seen":"2023-12-12T23:20:27.932579600Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1212 23:34:46.476375  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:34:46.476388  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:46.476400  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:46.476407  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:46.478230  160181 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:34:46.478243  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:46.478249  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:46.478254  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:46.478261  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:46 GMT
	I1212 23:34:46.478269  160181 round_trippers.go:580]     Audit-Id: ade14104-c4f3-4db7-acea-a94d2460f28f
	I1212 23:34:46.478276  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:46.478284  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:46.478471  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"926","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 23:34:46.478811  160181 pod_ready.go:92] pod "kube-apiserver-multinode-510563" in "kube-system" namespace has status "Ready":"True"
	I1212 23:34:46.478826  160181 pod_ready.go:81] duration metric: took 4.943146ms waiting for pod "kube-apiserver-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:34:46.478836  160181 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:34:46.478886  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-510563
	I1212 23:34:46.478896  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:46.478906  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:46.478912  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:46.480756  160181 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 23:34:46.480775  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:46.480783  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:46 GMT
	I1212 23:34:46.480791  160181 round_trippers.go:580]     Audit-Id: e107d1dc-6cd7-49ef-b606-6ea4018faced
	I1212 23:34:46.480798  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:46.480805  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:46.480812  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:46.480824  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:46.480986  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-510563","namespace":"kube-system","uid":"efdc7f68-25d6-4f6a-ab8f-1dec43407375","resourceVersion":"887","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"50f588add554ab298cca0792048dbecc","kubernetes.io/config.mirror":"50f588add554ab298cca0792048dbecc","kubernetes.io/config.seen":"2023-12-12T23:20:36.354954910Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1212 23:34:46.481383  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:34:46.481398  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:46.481409  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:46.481419  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:46.483893  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:34:46.483911  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:46.483920  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:46.483928  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:46.483936  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:46.483944  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:46.483952  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:46 GMT
	I1212 23:34:46.483963  160181 round_trippers.go:580]     Audit-Id: 6313c243-29a5-475e-a06c-8b3648526fcd
	I1212 23:34:46.484373  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"926","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 23:34:46.484699  160181 pod_ready.go:92] pod "kube-controller-manager-multinode-510563" in "kube-system" namespace has status "Ready":"True"
	I1212 23:34:46.484715  160181 pod_ready.go:81] duration metric: took 5.872547ms waiting for pod "kube-controller-manager-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:34:46.484723  160181 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fbk65" in "kube-system" namespace to be "Ready" ...
	I1212 23:34:46.644110  160181 request.go:629] Waited for 159.326218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fbk65
	I1212 23:34:46.644193  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fbk65
	I1212 23:34:46.644200  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:46.644216  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:46.644236  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:46.647423  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:34:46.647447  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:46.647458  160181 round_trippers.go:580]     Audit-Id: f234859e-117c-4088-8a21-7c9286c6902d
	I1212 23:34:46.647467  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:46.647474  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:46.647482  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:46.647490  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:46.647499  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:46 GMT
	I1212 23:34:46.648093  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fbk65","generateName":"kube-proxy-","namespace":"kube-system","uid":"478c2dce-ac51-47ac-9d34-20dc7c331056","resourceVersion":"1245","creationTimestamp":"2023-12-12T23:22:27Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d1b4c800-a24b-499d-bbe3-4a554353bc2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:22:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1b4c800-a24b-499d-bbe3-4a554353bc2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I1212 23:34:46.844061  160181 request.go:629] Waited for 195.396018ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m03
	I1212 23:34:46.844134  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m03
	I1212 23:34:46.844140  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:46.844147  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:46.844157  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:46.847010  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:34:46.847038  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:46.847049  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:46.847058  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:46.847067  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:46.847075  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:46.847085  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:46 GMT
	I1212 23:34:46.847095  160181 round_trippers.go:580]     Audit-Id: 68f00777-8e78-4d15-930a-3e21d61ecf3d
	I1212 23:34:46.847260  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m03","uid":"3755c9f4-1b3d-4c84-8733-ec4cec8b6525","resourceVersion":"1240","creationTimestamp":"2023-12-12T23:34:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_34_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:34:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I1212 23:34:47.043982  160181 request.go:629] Waited for 196.37658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fbk65
	I1212 23:34:47.044059  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fbk65
	I1212 23:34:47.044067  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:47.044075  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:47.044083  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:47.048073  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:34:47.048096  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:47.048105  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:47 GMT
	I1212 23:34:47.048113  160181 round_trippers.go:580]     Audit-Id: 6b5e3467-d52e-416f-94d8-da07ffbeb6b7
	I1212 23:34:47.048120  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:47.048127  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:47.048138  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:47.048147  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:47.048620  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fbk65","generateName":"kube-proxy-","namespace":"kube-system","uid":"478c2dce-ac51-47ac-9d34-20dc7c331056","resourceVersion":"1245","creationTimestamp":"2023-12-12T23:22:27Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d1b4c800-a24b-499d-bbe3-4a554353bc2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:22:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1b4c800-a24b-499d-bbe3-4a554353bc2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I1212 23:34:47.243547  160181 request.go:629] Waited for 194.38593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m03
	I1212 23:34:47.243603  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m03
	I1212 23:34:47.243609  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:47.243616  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:47.243626  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:47.246345  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:34:47.246365  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:47.246372  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:47 GMT
	I1212 23:34:47.246377  160181 round_trippers.go:580]     Audit-Id: 447602a2-ec4e-470c-939f-329617a5f871
	I1212 23:34:47.246382  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:47.246388  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:47.246393  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:47.246398  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:47.246509  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m03","uid":"3755c9f4-1b3d-4c84-8733-ec4cec8b6525","resourceVersion":"1240","creationTimestamp":"2023-12-12T23:34:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_34_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:34:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I1212 23:34:47.747579  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fbk65
	I1212 23:34:47.747604  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:47.747612  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:47.747618  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:47.750732  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:34:47.750754  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:47.750764  160181 round_trippers.go:580]     Audit-Id: f1698bc4-671a-4a33-8ee2-a10e047bf628
	I1212 23:34:47.750773  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:47.750781  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:47.750793  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:47.750801  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:47.750810  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:47 GMT
	I1212 23:34:47.751070  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fbk65","generateName":"kube-proxy-","namespace":"kube-system","uid":"478c2dce-ac51-47ac-9d34-20dc7c331056","resourceVersion":"1255","creationTimestamp":"2023-12-12T23:22:27Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d1b4c800-a24b-499d-bbe3-4a554353bc2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:22:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1b4c800-a24b-499d-bbe3-4a554353bc2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I1212 23:34:47.751579  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m03
	I1212 23:34:47.751603  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:47.751614  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:47.751622  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:47.754018  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:34:47.754043  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:47.754052  160181 round_trippers.go:580]     Audit-Id: 6a2a4c99-6d42-4584-8ddd-0fa980872378
	I1212 23:34:47.754061  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:47.754069  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:47.754077  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:47.754085  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:47.754095  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:47 GMT
	I1212 23:34:47.754359  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m03","uid":"3755c9f4-1b3d-4c84-8733-ec4cec8b6525","resourceVersion":"1240","creationTimestamp":"2023-12-12T23:34:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_34_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:34:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I1212 23:34:47.754615  160181 pod_ready.go:92] pod "kube-proxy-fbk65" in "kube-system" namespace has status "Ready":"True"
	I1212 23:34:47.754631  160181 pod_ready.go:81] duration metric: took 1.269901706s waiting for pod "kube-proxy-fbk65" in "kube-system" namespace to be "Ready" ...
	I1212 23:34:47.754641  160181 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hspw8" in "kube-system" namespace to be "Ready" ...
	I1212 23:34:47.843965  160181 request.go:629] Waited for 89.266478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hspw8
	I1212 23:34:47.844038  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hspw8
	I1212 23:34:47.844044  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:47.844051  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:47.844057  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:47.848271  160181 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 23:34:47.848290  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:47.848301  160181 round_trippers.go:580]     Audit-Id: ad6ce24a-e5ca-4550-8780-a25360da472e
	I1212 23:34:47.848306  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:47.848311  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:47.848317  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:47.848326  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:47.848337  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:47 GMT
	I1212 23:34:47.848662  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hspw8","generateName":"kube-proxy-","namespace":"kube-system","uid":"a2255be6-8705-40cd-8f35-a3e82906190c","resourceVersion":"855","creationTimestamp":"2023-12-12T23:20:49Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d1b4c800-a24b-499d-bbe3-4a554353bc2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1b4c800-a24b-499d-bbe3-4a554353bc2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1212 23:34:48.043885  160181 request.go:629] Waited for 194.799253ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:34:48.043947  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:34:48.043952  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:48.043960  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:48.043966  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:48.046839  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:34:48.046864  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:48.046873  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:48.046881  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:48.046888  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:48 GMT
	I1212 23:34:48.046896  160181 round_trippers.go:580]     Audit-Id: 53450a0b-6b55-4cca-81fc-a145727d70d0
	I1212 23:34:48.046903  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:48.046915  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:48.047415  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"926","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 23:34:48.047747  160181 pod_ready.go:92] pod "kube-proxy-hspw8" in "kube-system" namespace has status "Ready":"True"
	I1212 23:34:48.047762  160181 pod_ready.go:81] duration metric: took 293.115266ms waiting for pod "kube-proxy-hspw8" in "kube-system" namespace to be "Ready" ...
	I1212 23:34:48.047770  160181 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-msx8s" in "kube-system" namespace to be "Ready" ...
	I1212 23:34:48.243105  160181 request.go:629] Waited for 195.276397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msx8s
	I1212 23:34:48.243185  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msx8s
	I1212 23:34:48.243192  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:48.243204  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:48.243219  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:48.246207  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:34:48.246227  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:48.246236  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:48.246244  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:48 GMT
	I1212 23:34:48.246252  160181 round_trippers.go:580]     Audit-Id: bcd112fe-bae4-4005-b958-c659f48cec5f
	I1212 23:34:48.246259  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:48.246266  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:48.246279  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:48.246621  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-msx8s","generateName":"kube-proxy-","namespace":"kube-system","uid":"f41b9a6d-8132-45a6-9847-5a762664b008","resourceVersion":"1079","creationTimestamp":"2023-12-12T23:21:31Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d1b4c800-a24b-499d-bbe3-4a554353bc2e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:21:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1b4c800-a24b-499d-bbe3-4a554353bc2e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I1212 23:34:48.443388  160181 request.go:629] Waited for 196.357031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:34:48.443476  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563-m02
	I1212 23:34:48.443488  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:48.443504  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:48.443514  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:48.445976  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:34:48.445999  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:48.446009  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:48.446018  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:48.446027  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:48.446037  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:48.446049  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:48 GMT
	I1212 23:34:48.446060  160181 round_trippers.go:580]     Audit-Id: 26891040-4669-431f-8fc2-f6dda604e98c
	I1212 23:34:48.446382  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563-m02","uid":"624fd936-a780-4801-8d74-d9563b64b861","resourceVersion":"1239","creationTimestamp":"2023-12-12T23:33:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T23_34_45_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:33:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I1212 23:34:48.446640  160181 pod_ready.go:92] pod "kube-proxy-msx8s" in "kube-system" namespace has status "Ready":"True"
	I1212 23:34:48.446657  160181 pod_ready.go:81] duration metric: took 398.877275ms waiting for pod "kube-proxy-msx8s" in "kube-system" namespace to be "Ready" ...
	I1212 23:34:48.446669  160181 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:34:48.644096  160181 request.go:629] Waited for 197.363527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-510563
	I1212 23:34:48.644162  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-510563
	I1212 23:34:48.644168  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:48.644175  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:48.644194  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:48.647312  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:34:48.647333  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:48.647342  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:48.647350  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:48.647358  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:48.647366  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:48 GMT
	I1212 23:34:48.647374  160181 round_trippers.go:580]     Audit-Id: 22080eac-9650-4eb5-9ca5-79406588bd6f
	I1212 23:34:48.647384  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:48.647555  160181 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-510563","namespace":"kube-system","uid":"044da73c-9466-4a43-b283-5f4b9cc04df9","resourceVersion":"895","creationTimestamp":"2023-12-12T23:20:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fdb335c77d5fb1581ea23fa0adf419e9","kubernetes.io/config.mirror":"fdb335c77d5fb1581ea23fa0adf419e9","kubernetes.io/config.seen":"2023-12-12T23:20:36.354955844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T23:20:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1212 23:34:48.843159  160181 request.go:629] Waited for 195.272729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:34:48.843246  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-510563
	I1212 23:34:48.843256  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:48.843268  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:48.843279  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:48.846145  160181 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 23:34:48.846171  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:48.846182  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:48 GMT
	I1212 23:34:48.846191  160181 round_trippers.go:580]     Audit-Id: 3a4bcbb9-c65b-4b3d-9940-0323d211423c
	I1212 23:34:48.846200  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:48.846210  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:48.846215  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:48.846220  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:48.846896  160181 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"926","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T23:20:33Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1212 23:34:48.847204  160181 pod_ready.go:92] pod "kube-scheduler-multinode-510563" in "kube-system" namespace has status "Ready":"True"
	I1212 23:34:48.847221  160181 pod_ready.go:81] duration metric: took 400.543174ms waiting for pod "kube-scheduler-multinode-510563" in "kube-system" namespace to be "Ready" ...
	I1212 23:34:48.847235  160181 pod_ready.go:38] duration metric: took 2.399418752s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:34:48.847256  160181 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:34:48.847319  160181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:34:48.863205  160181 system_svc.go:56] duration metric: took 15.94291ms WaitForService to wait for kubelet.
	I1212 23:34:48.863232  160181 kubeadm.go:581] duration metric: took 2.439012973s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:34:48.863255  160181 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:34:49.043666  160181 request.go:629] Waited for 180.332979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes
	I1212 23:34:49.043728  160181 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes
	I1212 23:34:49.043732  160181 round_trippers.go:469] Request Headers:
	I1212 23:34:49.043740  160181 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 23:34:49.043746  160181 round_trippers.go:473]     Accept: application/json, */*
	I1212 23:34:49.047101  160181 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 23:34:49.047129  160181 round_trippers.go:577] Response Headers:
	I1212 23:34:49.047140  160181 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df971f46-edf8-4ffc-96fe-4a4b705d6ee1
	I1212 23:34:49.047148  160181 round_trippers.go:580]     Date: Tue, 12 Dec 2023 23:34:49 GMT
	I1212 23:34:49.047156  160181 round_trippers.go:580]     Audit-Id: d62ec507-12fa-43fe-a2be-d3fc666ef49a
	I1212 23:34:49.047164  160181 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 23:34:49.047177  160181 round_trippers.go:580]     Content-Type: application/json
	I1212 23:34:49.047185  160181 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6aa8bd9a-c8b5-4b36-bbcb-b424fb90d316
	I1212 23:34:49.047497  160181 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1259"},"items":[{"metadata":{"name":"multinode-510563","uid":"ec27a0e0-5a23-452b-a491-bdd5e109f20c","resourceVersion":"926","creationTimestamp":"2023-12-12T23:20:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-510563","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a4cfdd7fe6105c8f2fb237e157ac115c68ce5446","minikube.k8s.io/name":"multinode-510563","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T23_20_37_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16238 chars]
	I1212 23:34:49.048062  160181 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:34:49.048100  160181 node_conditions.go:123] node cpu capacity is 2
	I1212 23:34:49.048109  160181 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:34:49.048113  160181 node_conditions.go:123] node cpu capacity is 2
	I1212 23:34:49.048116  160181 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:34:49.048123  160181 node_conditions.go:123] node cpu capacity is 2
	I1212 23:34:49.048127  160181 node_conditions.go:105] duration metric: took 184.866641ms to run NodePressure ...
	I1212 23:34:49.048137  160181 start.go:228] waiting for startup goroutines ...
	I1212 23:34:49.048173  160181 start.go:242] writing updated cluster config ...
	I1212 23:34:49.048499  160181 ssh_runner.go:195] Run: rm -f paused
	I1212 23:34:49.099119  160181 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 23:34:49.102262  160181 out.go:177] * Done! kubectl is now configured to use "multinode-510563" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-12 23:30:34 UTC, ends at Tue 2023-12-12 23:34:50 UTC. --
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.223571428Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702424090223556342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=1bd61eea-8852-4c4b-bb77-84a2afeae9f3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.224465777Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=da89c745-8c86-4acf-98bd-2b9bb2397c2f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.224537696Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=da89c745-8c86-4acf-98bd-2b9bb2397c2f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.224754502Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bae88a78ebe3dee248c17f6d00943f51c4c4a482759d1effc885a8f1c364f7da,PodSandboxId:6b590e5c037a914d38bc67e1e0ab4df6388726f0bd5700dde95cf3396cb87f11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423899783314591,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb4f186a-9bb9-488f-8a74-6e01f352fc05,},Annotations:map[string]string{io.kubernetes.container.hash: 181c10d3,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8bf85d3341da7b64f58d8af0a6922244e5859445b43c27a9ca48228cb9c12c,PodSandboxId:4f3017cf9387179fddeedd476a49eda42e0aaecb265e9d236555995c97644bef,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702423879829569652,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4vnmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00d42ae1-e3c5-461d-9019-b5609191598e,},Annotations:map[string]string{io.kubernetes.container.hash: a8d1b557,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5521be98891ba84419793936c52e9aeead9036c5083ec0e13681ca2d099f62,PodSandboxId:1ddbee372f5bb92000429ddb9662c4093ba89eefb9ba78a12e611888aea2b214,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423876027934477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcxks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 503de693-19d6-45c5-97c6-3b8e5657bfee,},Annotations:map[string]string{io.kubernetes.container.hash: ced1e245,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff81a9793cfbf54fb97f6a304d83f531e6d40b79a58007929b347734a437c36c,PodSandboxId:1a1a4581ab6fce9a70ac23f6c277791499710b0871e3fb5c376273ce8a72db84,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702423871144885706,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v4js8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: cfe24f85-472c-4ef2-9a48-9e3647cc8feb,},Annotations:map[string]string{io.kubernetes.container.hash: a0875e0f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914450b1d9b01f7117b9650eedbdf645d9a82d19515c259fcd5cb7d797532c06,PodSandboxId:6b590e5c037a914d38bc67e1e0ab4df6388726f0bd5700dde95cf3396cb87f11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702423868580885437,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: cb4f186a-9bb9-488f-8a74-6e01f352fc05,},Annotations:map[string]string{io.kubernetes.container.hash: 181c10d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3123c4e232d7f0faa04b6438f062448e708a44701c771b97de3270116f14d817,PodSandboxId:40aa5f0684fdd3ae2494c62637358fff7780ac40dae4347391ca56a529788b58,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423868530592930,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hspw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2255be6-8705-40cd-8f35-a3e82906
190c,},Annotations:map[string]string{io.kubernetes.container.hash: d38d9edd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2728448b2edcc87ade50f2949703d0390920cde4dfff230c1e0825d6de6ac51,PodSandboxId:4c78ee6e9d0ddac7aa542bdb829e72f7180a33c6282679b7f26cb4a9c5a9409a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423862085550676,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99da52f53b721a1a612acc1bca02d501,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 52ffbf68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17c982c3dbaaf94f8207b5e97045e94777f9c341f427080f921cf73f79511a5e,PodSandboxId:e5d1e057659072c426613dd13be4020719795fd80058713725128846afa1efa6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423861858858935,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f588add554ab298cca0792048dbecc,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef36a1b668794db6a93b0fe6bd77304d10713cdade43b0c9b0a510a7dbdc4be,PodSandboxId:5c024524b750820d2645d2961242be1ae5272f476743045a565fc695c34eedb2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423861720398173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdb335c77d5fb1581ea23fa0adf419e9,},Annotations:map[string]string{io.
kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28c6dde8defdd0400b45f64646615de59c63d354f3aefa4ce2b8b549f04106d9,PodSandboxId:80324346e908f30a2ca2c8a54540dfcbbd6fec35a0785a566a089d5f82792324,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423861599865607,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b970951c1b4ca2bc525afa7c2eb2fef,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 46ec173a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=da89c745-8c86-4acf-98bd-2b9bb2397c2f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.267143865Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7389bb98-6672-4de7-9110-7c244aa06bdd name=/runtime.v1.RuntimeService/Version
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.267285439Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7389bb98-6672-4de7-9110-7c244aa06bdd name=/runtime.v1.RuntimeService/Version
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.268176184Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1d3ae5c2-eb2f-4389-8f81-f65b184eb151 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.268642161Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702424090268628309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=1d3ae5c2-eb2f-4389-8f81-f65b184eb151 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.269359238Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=acb9bcc3-1352-4e42-889c-7d0df322f3d0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.269430666Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=acb9bcc3-1352-4e42-889c-7d0df322f3d0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.269676612Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bae88a78ebe3dee248c17f6d00943f51c4c4a482759d1effc885a8f1c364f7da,PodSandboxId:6b590e5c037a914d38bc67e1e0ab4df6388726f0bd5700dde95cf3396cb87f11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423899783314591,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb4f186a-9bb9-488f-8a74-6e01f352fc05,},Annotations:map[string]string{io.kubernetes.container.hash: 181c10d3,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8bf85d3341da7b64f58d8af0a6922244e5859445b43c27a9ca48228cb9c12c,PodSandboxId:4f3017cf9387179fddeedd476a49eda42e0aaecb265e9d236555995c97644bef,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702423879829569652,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4vnmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00d42ae1-e3c5-461d-9019-b5609191598e,},Annotations:map[string]string{io.kubernetes.container.hash: a8d1b557,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5521be98891ba84419793936c52e9aeead9036c5083ec0e13681ca2d099f62,PodSandboxId:1ddbee372f5bb92000429ddb9662c4093ba89eefb9ba78a12e611888aea2b214,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423876027934477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcxks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 503de693-19d6-45c5-97c6-3b8e5657bfee,},Annotations:map[string]string{io.kubernetes.container.hash: ced1e245,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff81a9793cfbf54fb97f6a304d83f531e6d40b79a58007929b347734a437c36c,PodSandboxId:1a1a4581ab6fce9a70ac23f6c277791499710b0871e3fb5c376273ce8a72db84,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702423871144885706,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v4js8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: cfe24f85-472c-4ef2-9a48-9e3647cc8feb,},Annotations:map[string]string{io.kubernetes.container.hash: a0875e0f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914450b1d9b01f7117b9650eedbdf645d9a82d19515c259fcd5cb7d797532c06,PodSandboxId:6b590e5c037a914d38bc67e1e0ab4df6388726f0bd5700dde95cf3396cb87f11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702423868580885437,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: cb4f186a-9bb9-488f-8a74-6e01f352fc05,},Annotations:map[string]string{io.kubernetes.container.hash: 181c10d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3123c4e232d7f0faa04b6438f062448e708a44701c771b97de3270116f14d817,PodSandboxId:40aa5f0684fdd3ae2494c62637358fff7780ac40dae4347391ca56a529788b58,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423868530592930,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hspw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2255be6-8705-40cd-8f35-a3e82906
190c,},Annotations:map[string]string{io.kubernetes.container.hash: d38d9edd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2728448b2edcc87ade50f2949703d0390920cde4dfff230c1e0825d6de6ac51,PodSandboxId:4c78ee6e9d0ddac7aa542bdb829e72f7180a33c6282679b7f26cb4a9c5a9409a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423862085550676,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99da52f53b721a1a612acc1bca02d501,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 52ffbf68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17c982c3dbaaf94f8207b5e97045e94777f9c341f427080f921cf73f79511a5e,PodSandboxId:e5d1e057659072c426613dd13be4020719795fd80058713725128846afa1efa6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423861858858935,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f588add554ab298cca0792048dbecc,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef36a1b668794db6a93b0fe6bd77304d10713cdade43b0c9b0a510a7dbdc4be,PodSandboxId:5c024524b750820d2645d2961242be1ae5272f476743045a565fc695c34eedb2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423861720398173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdb335c77d5fb1581ea23fa0adf419e9,},Annotations:map[string]string{io.
kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28c6dde8defdd0400b45f64646615de59c63d354f3aefa4ce2b8b549f04106d9,PodSandboxId:80324346e908f30a2ca2c8a54540dfcbbd6fec35a0785a566a089d5f82792324,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423861599865607,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b970951c1b4ca2bc525afa7c2eb2fef,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 46ec173a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=acb9bcc3-1352-4e42-889c-7d0df322f3d0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.318704131Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c06d6342-354e-4cda-8e6f-e1f67a8ef407 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.318782928Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c06d6342-354e-4cda-8e6f-e1f67a8ef407 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.324889912Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=793bcdbd-84b0-44d0-96b4-e0bfa6003dba name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.325583407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702424090325563244,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=793bcdbd-84b0-44d0-96b4-e0bfa6003dba name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.326520910Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0feaded1-8f26-4d96-96dd-038d85fbacdd name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.326657475Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0feaded1-8f26-4d96-96dd-038d85fbacdd name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.326996052Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bae88a78ebe3dee248c17f6d00943f51c4c4a482759d1effc885a8f1c364f7da,PodSandboxId:6b590e5c037a914d38bc67e1e0ab4df6388726f0bd5700dde95cf3396cb87f11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423899783314591,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb4f186a-9bb9-488f-8a74-6e01f352fc05,},Annotations:map[string]string{io.kubernetes.container.hash: 181c10d3,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8bf85d3341da7b64f58d8af0a6922244e5859445b43c27a9ca48228cb9c12c,PodSandboxId:4f3017cf9387179fddeedd476a49eda42e0aaecb265e9d236555995c97644bef,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702423879829569652,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4vnmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00d42ae1-e3c5-461d-9019-b5609191598e,},Annotations:map[string]string{io.kubernetes.container.hash: a8d1b557,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5521be98891ba84419793936c52e9aeead9036c5083ec0e13681ca2d099f62,PodSandboxId:1ddbee372f5bb92000429ddb9662c4093ba89eefb9ba78a12e611888aea2b214,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423876027934477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcxks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 503de693-19d6-45c5-97c6-3b8e5657bfee,},Annotations:map[string]string{io.kubernetes.container.hash: ced1e245,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff81a9793cfbf54fb97f6a304d83f531e6d40b79a58007929b347734a437c36c,PodSandboxId:1a1a4581ab6fce9a70ac23f6c277791499710b0871e3fb5c376273ce8a72db84,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702423871144885706,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v4js8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: cfe24f85-472c-4ef2-9a48-9e3647cc8feb,},Annotations:map[string]string{io.kubernetes.container.hash: a0875e0f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914450b1d9b01f7117b9650eedbdf645d9a82d19515c259fcd5cb7d797532c06,PodSandboxId:6b590e5c037a914d38bc67e1e0ab4df6388726f0bd5700dde95cf3396cb87f11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702423868580885437,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: cb4f186a-9bb9-488f-8a74-6e01f352fc05,},Annotations:map[string]string{io.kubernetes.container.hash: 181c10d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3123c4e232d7f0faa04b6438f062448e708a44701c771b97de3270116f14d817,PodSandboxId:40aa5f0684fdd3ae2494c62637358fff7780ac40dae4347391ca56a529788b58,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423868530592930,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hspw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2255be6-8705-40cd-8f35-a3e82906
190c,},Annotations:map[string]string{io.kubernetes.container.hash: d38d9edd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2728448b2edcc87ade50f2949703d0390920cde4dfff230c1e0825d6de6ac51,PodSandboxId:4c78ee6e9d0ddac7aa542bdb829e72f7180a33c6282679b7f26cb4a9c5a9409a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423862085550676,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99da52f53b721a1a612acc1bca02d501,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 52ffbf68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17c982c3dbaaf94f8207b5e97045e94777f9c341f427080f921cf73f79511a5e,PodSandboxId:e5d1e057659072c426613dd13be4020719795fd80058713725128846afa1efa6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423861858858935,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f588add554ab298cca0792048dbecc,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef36a1b668794db6a93b0fe6bd77304d10713cdade43b0c9b0a510a7dbdc4be,PodSandboxId:5c024524b750820d2645d2961242be1ae5272f476743045a565fc695c34eedb2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423861720398173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdb335c77d5fb1581ea23fa0adf419e9,},Annotations:map[string]string{io.
kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28c6dde8defdd0400b45f64646615de59c63d354f3aefa4ce2b8b549f04106d9,PodSandboxId:80324346e908f30a2ca2c8a54540dfcbbd6fec35a0785a566a089d5f82792324,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423861599865607,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b970951c1b4ca2bc525afa7c2eb2fef,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 46ec173a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0feaded1-8f26-4d96-96dd-038d85fbacdd name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.376964731Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4a3814d9-b828-4dd9-969b-96084ed3f384 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.377071271Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4a3814d9-b828-4dd9-969b-96084ed3f384 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.378686139Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b9def283-3106-4bd3-8efe-094bc1437db7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.379180532Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702424090379166736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b9def283-3106-4bd3-8efe-094bc1437db7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.379727450Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=537fc847-5b34-4afe-be83-04276ff70d93 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.379801522Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=537fc847-5b34-4afe-be83-04276ff70d93 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:34:50 multinode-510563 crio[709]: time="2023-12-12 23:34:50.380055043Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bae88a78ebe3dee248c17f6d00943f51c4c4a482759d1effc885a8f1c364f7da,PodSandboxId:6b590e5c037a914d38bc67e1e0ab4df6388726f0bd5700dde95cf3396cb87f11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423899783314591,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb4f186a-9bb9-488f-8a74-6e01f352fc05,},Annotations:map[string]string{io.kubernetes.container.hash: 181c10d3,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8bf85d3341da7b64f58d8af0a6922244e5859445b43c27a9ca48228cb9c12c,PodSandboxId:4f3017cf9387179fddeedd476a49eda42e0aaecb265e9d236555995c97644bef,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702423879829569652,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4vnmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00d42ae1-e3c5-461d-9019-b5609191598e,},Annotations:map[string]string{io.kubernetes.container.hash: a8d1b557,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5521be98891ba84419793936c52e9aeead9036c5083ec0e13681ca2d099f62,PodSandboxId:1ddbee372f5bb92000429ddb9662c4093ba89eefb9ba78a12e611888aea2b214,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423876027934477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcxks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 503de693-19d6-45c5-97c6-3b8e5657bfee,},Annotations:map[string]string{io.kubernetes.container.hash: ced1e245,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff81a9793cfbf54fb97f6a304d83f531e6d40b79a58007929b347734a437c36c,PodSandboxId:1a1a4581ab6fce9a70ac23f6c277791499710b0871e3fb5c376273ce8a72db84,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702423871144885706,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-v4js8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: cfe24f85-472c-4ef2-9a48-9e3647cc8feb,},Annotations:map[string]string{io.kubernetes.container.hash: a0875e0f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914450b1d9b01f7117b9650eedbdf645d9a82d19515c259fcd5cb7d797532c06,PodSandboxId:6b590e5c037a914d38bc67e1e0ab4df6388726f0bd5700dde95cf3396cb87f11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702423868580885437,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: cb4f186a-9bb9-488f-8a74-6e01f352fc05,},Annotations:map[string]string{io.kubernetes.container.hash: 181c10d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3123c4e232d7f0faa04b6438f062448e708a44701c771b97de3270116f14d817,PodSandboxId:40aa5f0684fdd3ae2494c62637358fff7780ac40dae4347391ca56a529788b58,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423868530592930,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hspw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2255be6-8705-40cd-8f35-a3e82906
190c,},Annotations:map[string]string{io.kubernetes.container.hash: d38d9edd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2728448b2edcc87ade50f2949703d0390920cde4dfff230c1e0825d6de6ac51,PodSandboxId:4c78ee6e9d0ddac7aa542bdb829e72f7180a33c6282679b7f26cb4a9c5a9409a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423862085550676,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99da52f53b721a1a612acc1bca02d501,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 52ffbf68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17c982c3dbaaf94f8207b5e97045e94777f9c341f427080f921cf73f79511a5e,PodSandboxId:e5d1e057659072c426613dd13be4020719795fd80058713725128846afa1efa6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423861858858935,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f588add554ab298cca0792048dbecc,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef36a1b668794db6a93b0fe6bd77304d10713cdade43b0c9b0a510a7dbdc4be,PodSandboxId:5c024524b750820d2645d2961242be1ae5272f476743045a565fc695c34eedb2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423861720398173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdb335c77d5fb1581ea23fa0adf419e9,},Annotations:map[string]string{io.
kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28c6dde8defdd0400b45f64646615de59c63d354f3aefa4ce2b8b549f04106d9,PodSandboxId:80324346e908f30a2ca2c8a54540dfcbbd6fec35a0785a566a089d5f82792324,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423861599865607,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-510563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b970951c1b4ca2bc525afa7c2eb2fef,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 46ec173a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=537fc847-5b34-4afe-be83-04276ff70d93 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bae88a78ebe3d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   6b590e5c037a9       storage-provisioner
	8c8bf85d3341d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   4f3017cf93871       busybox-5bc68d56bd-4vnmj
	7a5521be98891       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   1ddbee372f5bb       coredns-5dd5756b68-zcxks
	ff81a9793cfbf       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   1a1a4581ab6fc       kindnet-v4js8
	914450b1d9b01       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   6b590e5c037a9       storage-provisioner
	3123c4e232d7f       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   40aa5f0684fdd       kube-proxy-hspw8
	d2728448b2edc       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   4c78ee6e9d0dd       etcd-multinode-510563
	17c982c3dbaaf       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   e5d1e05765907       kube-controller-manager-multinode-510563
	eef36a1b66879       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   5c024524b7508       kube-scheduler-multinode-510563
	28c6dde8defdd       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   80324346e908f       kube-apiserver-multinode-510563
	
	* 
	* ==> coredns [7a5521be98891ba84419793936c52e9aeead9036c5083ec0e13681ca2d099f62] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46311 - 45891 "HINFO IN 133115336349445231.1338128561936503111. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011357901s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-510563
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-510563
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446
	                    minikube.k8s.io/name=multinode-510563
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T23_20_37_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:20:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-510563
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:34:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:31:37 +0000   Tue, 12 Dec 2023 23:20:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:31:37 +0000   Tue, 12 Dec 2023 23:20:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:31:37 +0000   Tue, 12 Dec 2023 23:20:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:31:37 +0000   Tue, 12 Dec 2023 23:31:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    multinode-510563
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ea9003964b849fbada5a3ef7b0b44a7
	  System UUID:                4ea90039-64b8-49fb-ada5-a3ef7b0b44a7
	  Boot ID:                    0df1eef7-4a8d-497b-8917-0cc82d64cf5f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-4vnmj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-zcxks                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-multinode-510563                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-v4js8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-multinode-510563             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-510563    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-hspw8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-multinode-510563             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m41s                  kube-proxy       
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node multinode-510563 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node multinode-510563 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node multinode-510563 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-510563 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-510563 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-510563 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           14m                    node-controller  Node multinode-510563 event: Registered Node multinode-510563 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-510563 status is now: NodeReady
	  Normal  Starting                 3m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m50s (x8 over 3m50s)  kubelet          Node multinode-510563 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x8 over 3m50s)  kubelet          Node multinode-510563 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x7 over 3m50s)  kubelet          Node multinode-510563 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m31s                  node-controller  Node multinode-510563 event: Registered Node multinode-510563 in Controller
	
	
	Name:               multinode-510563-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-510563-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446
	                    minikube.k8s.io/name=multinode-510563
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_12T23_34_45_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:33:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-510563-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:34:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:33:02 +0000   Tue, 12 Dec 2023 23:33:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:33:02 +0000   Tue, 12 Dec 2023 23:33:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:33:02 +0000   Tue, 12 Dec 2023 23:33:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:33:02 +0000   Tue, 12 Dec 2023 23:33:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    multinode-510563-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 01ff35da603346c58e26ad58a3d3ca74
	  System UUID:                01ff35da-6033-46c5-8e26-ad58a3d3ca74
	  Boot ID:                    bd39a86f-efc2-4469-ad3a-603d6cbd436e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-wb8v9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-5v7sf               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-msx8s            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From        Message
	  ----     ------                   ----                   ----        -------
	  Normal   Starting                 13m                    kube-proxy  
	  Normal   Starting                 106s                   kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet     Node multinode-510563-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet     Node multinode-510563-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet     Node multinode-510563-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                    kubelet     Node multinode-510563-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m52s                  kubelet     Node multinode-510563-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m20s (x2 over 3m20s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       111s                   kubelet     Node multinode-510563-m02 status is now: NodeNotSchedulable
	  Normal   Starting                 109s                   kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  108s (x2 over 108s)    kubelet     Node multinode-510563-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    108s (x2 over 108s)    kubelet     Node multinode-510563-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     108s (x2 over 108s)    kubelet     Node multinode-510563-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  108s                   kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                108s                   kubelet     Node multinode-510563-m02 status is now: NodeReady
	
	
	Name:               multinode-510563-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-510563-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446
	                    minikube.k8s.io/name=multinode-510563
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_12T23_34_45_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:34:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-510563-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:34:45 +0000   Tue, 12 Dec 2023 23:34:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:34:45 +0000   Tue, 12 Dec 2023 23:34:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:34:45 +0000   Tue, 12 Dec 2023 23:34:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:34:45 +0000   Tue, 12 Dec 2023 23:34:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.133
	  Hostname:    multinode-510563-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 f812c01a2b094f3ba849f3bd4831b0ee
	  System UUID:                f812c01a-2b09-4f3b-a849-f3bd4831b0ee
	  Boot ID:                    b385fcd9-c2fa-4ce9-b0a8-ddf396692b1a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-5hvf4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kindnet-lqdxw               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-fbk65            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                 From        Message
	  ----     ------                   ----                ----        -------
	  Normal   Starting                 11m                 kube-proxy  
	  Normal   Starting                 12m                 kube-proxy  
	  Normal   Starting                 3s                  kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)   kubelet     Node multinode-510563-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)   kubelet     Node multinode-510563-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)   kubelet     Node multinode-510563-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                 kubelet     Node multinode-510563-m03 status is now: NodeReady
	  Normal   Starting                 11m                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)   kubelet     Node multinode-510563-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)   kubelet     Node multinode-510563-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)   kubelet     Node multinode-510563-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                 kubelet     Node multinode-510563-m03 status is now: NodeReady
	  Normal   NodeNotReady             70s                 kubelet     Node multinode-510563-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        40s (x2 over 100s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 5s                  kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)     kubelet     Node multinode-510563-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)     kubelet     Node multinode-510563-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                  kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                  kubelet     Node multinode-510563-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)     kubelet     Node multinode-510563-m03 status is now: NodeHasSufficientMemory
	
	* 
	* ==> dmesg <==
	* [Dec12 23:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068398] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.346217] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.297008] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.138249] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.704410] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.053123] systemd-fstab-generator[635]: Ignoring "noauto" for root device
	[  +0.111235] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.154012] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.105914] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.206643] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[Dec12 23:31] systemd-fstab-generator[910]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [d2728448b2edcc87ade50f2949703d0390920cde4dfff230c1e0825d6de6ac51] <==
	* {"level":"info","ts":"2023-12-12T23:31:03.665746Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T23:31:03.665777Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T23:31:03.666035Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da switched to configuration voters=(4085449137511063770)"}
	{"level":"info","ts":"2023-12-12T23:31:03.666159Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"afb1a6a08b4dab74","local-member-id":"38b26e584d45e0da","added-peer-id":"38b26e584d45e0da","added-peer-peer-urls":["https://192.168.39.38:2380"]}
	{"level":"info","ts":"2023-12-12T23:31:03.666341Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"afb1a6a08b4dab74","local-member-id":"38b26e584d45e0da","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:31:03.666389Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:31:03.696952Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-12T23:31:03.700175Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"38b26e584d45e0da","initial-advertise-peer-urls":["https://192.168.39.38:2380"],"listen-peer-urls":["https://192.168.39.38:2380"],"advertise-client-urls":["https://192.168.39.38:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.38:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T23:31:03.70842Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.38:2380"}
	{"level":"info","ts":"2023-12-12T23:31:03.708474Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.38:2380"}
	{"level":"info","ts":"2023-12-12T23:31:03.709981Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T23:31:05.414623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-12T23:31:05.414662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-12T23:31:05.414695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da received MsgPreVoteResp from 38b26e584d45e0da at term 2"}
	{"level":"info","ts":"2023-12-12T23:31:05.414709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became candidate at term 3"}
	{"level":"info","ts":"2023-12-12T23:31:05.414714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da received MsgVoteResp from 38b26e584d45e0da at term 3"}
	{"level":"info","ts":"2023-12-12T23:31:05.414723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became leader at term 3"}
	{"level":"info","ts":"2023-12-12T23:31:05.414733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 38b26e584d45e0da elected leader 38b26e584d45e0da at term 3"}
	{"level":"info","ts":"2023-12-12T23:31:05.417969Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:31:05.41902Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T23:31:05.417916Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"38b26e584d45e0da","local-member-attributes":"{Name:multinode-510563 ClientURLs:[https://192.168.39.38:2379]}","request-path":"/0/members/38b26e584d45e0da/attributes","cluster-id":"afb1a6a08b4dab74","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:31:05.419578Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:31:05.420582Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.38:2379"}
	{"level":"info","ts":"2023-12-12T23:31:05.420663Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:31:05.42069Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  23:34:50 up 4 min,  0 users,  load average: 0.60, 0.28, 0.12
	Linux multinode-510563 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [ff81a9793cfbf54fb97f6a304d83f531e6d40b79a58007929b347734a437c36c] <==
	* I1212 23:34:02.963024       1 main.go:250] Node multinode-510563-m03 has CIDR [10.244.3.0/24] 
	I1212 23:34:12.973929       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I1212 23:34:12.973984       1 main.go:227] handling current node
	I1212 23:34:12.973996       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I1212 23:34:12.974002       1 main.go:250] Node multinode-510563-m02 has CIDR [10.244.1.0/24] 
	I1212 23:34:12.974098       1 main.go:223] Handling node with IPs: map[192.168.39.133:{}]
	I1212 23:34:12.974134       1 main.go:250] Node multinode-510563-m03 has CIDR [10.244.3.0/24] 
	I1212 23:34:22.988475       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I1212 23:34:22.988520       1 main.go:227] handling current node
	I1212 23:34:22.988530       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I1212 23:34:22.988536       1 main.go:250] Node multinode-510563-m02 has CIDR [10.244.1.0/24] 
	I1212 23:34:22.988632       1 main.go:223] Handling node with IPs: map[192.168.39.133:{}]
	I1212 23:34:22.988664       1 main.go:250] Node multinode-510563-m03 has CIDR [10.244.3.0/24] 
	I1212 23:34:32.999304       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I1212 23:34:32.999350       1 main.go:227] handling current node
	I1212 23:34:32.999361       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I1212 23:34:32.999367       1 main.go:250] Node multinode-510563-m02 has CIDR [10.244.1.0/24] 
	I1212 23:34:32.999454       1 main.go:223] Handling node with IPs: map[192.168.39.133:{}]
	I1212 23:34:32.999485       1 main.go:250] Node multinode-510563-m03 has CIDR [10.244.3.0/24] 
	I1212 23:34:43.011544       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I1212 23:34:43.011593       1 main.go:227] handling current node
	I1212 23:34:43.011614       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I1212 23:34:43.011620       1 main.go:250] Node multinode-510563-m02 has CIDR [10.244.1.0/24] 
	I1212 23:34:43.011733       1 main.go:223] Handling node with IPs: map[192.168.39.133:{}]
	I1212 23:34:43.011768       1 main.go:250] Node multinode-510563-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kube-apiserver [28c6dde8defdd0400b45f64646615de59c63d354f3aefa4ce2b8b549f04106d9] <==
	* I1212 23:31:06.874091       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1212 23:31:06.874147       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1212 23:31:06.875152       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1212 23:31:06.875342       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1212 23:31:07.035991       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 23:31:07.045488       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 23:31:07.071861       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1212 23:31:07.071958       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1212 23:31:07.071999       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 23:31:07.072653       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 23:31:07.072767       1 shared_informer.go:318] Caches are synced for configmaps
	I1212 23:31:07.073874       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 23:31:07.074376       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 23:31:07.074503       1 aggregator.go:166] initial CRD sync complete...
	I1212 23:31:07.074540       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 23:31:07.074569       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 23:31:07.074598       1 cache.go:39] Caches are synced for autoregister controller
	E1212 23:31:07.093691       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 23:31:07.887759       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 23:31:09.821480       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 23:31:09.978285       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 23:31:09.993934       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 23:31:10.061775       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 23:31:10.071767       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 23:31:57.327671       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [17c982c3dbaaf94f8207b5e97045e94777f9c341f427080f921cf73f79511a5e] <==
	* I1212 23:33:02.142566       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-510563-m02\" does not exist"
	I1212 23:33:02.142629       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-6hjc6" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-6hjc6"
	I1212 23:33:02.155728       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-510563-m02" podCIDRs=["10.244.1.0/24"]
	I1212 23:33:02.403141       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-510563-m02"
	I1212 23:33:03.057913       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="91.835µs"
	I1212 23:33:03.266579       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.617366ms"
	I1212 23:33:03.266932       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="59.779µs"
	I1212 23:33:16.309914       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="101.528µs"
	I1212 23:33:16.915693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="51.355µs"
	I1212 23:33:16.918660       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="46.833µs"
	I1212 23:33:40.022256       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-510563-m02"
	I1212 23:34:41.646170       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-wb8v9"
	I1212 23:34:41.665596       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="30.168747ms"
	I1212 23:34:41.688605       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.940491ms"
	I1212 23:34:41.688810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="146.388µs"
	I1212 23:34:41.689003       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="91.611µs"
	I1212 23:34:43.181712       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.247166ms"
	I1212 23:34:43.182418       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="48.594µs"
	I1212 23:34:44.659381       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-510563-m02"
	I1212 23:34:45.316565       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-510563-m03\" does not exist"
	I1212 23:34:45.317554       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-510563-m02"
	I1212 23:34:45.317800       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-5hvf4" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-5hvf4"
	I1212 23:34:45.332864       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-510563-m03" podCIDRs=["10.244.2.0/24"]
	I1212 23:34:45.660780       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-510563-m02"
	I1212 23:34:46.232095       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="74.626µs"
	
	* 
	* ==> kube-proxy [3123c4e232d7f0faa04b6438f062448e708a44701c771b97de3270116f14d817] <==
	* I1212 23:31:08.847331       1 server_others.go:69] "Using iptables proxy"
	I1212 23:31:08.890887       1 node.go:141] Successfully retrieved node IP: 192.168.39.38
	I1212 23:31:09.343619       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 23:31:09.343700       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:31:09.352589       1 server_others.go:152] "Using iptables Proxier"
	I1212 23:31:09.352695       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:31:09.352951       1 server.go:846] "Version info" version="v1.28.4"
	I1212 23:31:09.353011       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:31:09.355521       1 config.go:188] "Starting service config controller"
	I1212 23:31:09.355898       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:31:09.356128       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:31:09.356160       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:31:09.356901       1 config.go:315] "Starting node config controller"
	I1212 23:31:09.358547       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:31:09.457296       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:31:09.457369       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 23:31:09.463594       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [eef36a1b668794db6a93b0fe6bd77304d10713cdade43b0c9b0a510a7dbdc4be] <==
	* I1212 23:31:03.865169       1 serving.go:348] Generated self-signed cert in-memory
	W1212 23:31:06.981407       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 23:31:06.981562       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:31:06.981649       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 23:31:06.981657       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 23:31:07.037780       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1212 23:31:07.037833       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:31:07.042691       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1212 23:31:07.043059       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 23:31:07.043141       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 23:31:07.043176       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 23:31:07.144364       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:30:34 UTC, ends at Tue 2023-12-12 23:34:51 UTC. --
	Dec 12 23:31:09 multinode-510563 kubelet[916]: E1212 23:31:09.144103     916 projected.go:198] Error preparing data for projected volume kube-api-access-jjwzt for pod default/busybox-5bc68d56bd-4vnmj: object "default"/"kube-root-ca.crt" not registered
	Dec 12 23:31:09 multinode-510563 kubelet[916]: E1212 23:31:09.144157     916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00d42ae1-e3c5-461d-9019-b5609191598e-kube-api-access-jjwzt podName:00d42ae1-e3c5-461d-9019-b5609191598e nodeName:}" failed. No retries permitted until 2023-12-12 23:31:11.144142903 +0000 UTC m=+10.848696013 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-jjwzt" (UniqueName: "kubernetes.io/projected/00d42ae1-e3c5-461d-9019-b5609191598e-kube-api-access-jjwzt") pod "busybox-5bc68d56bd-4vnmj" (UID: "00d42ae1-e3c5-461d-9019-b5609191598e") : object "default"/"kube-root-ca.crt" not registered
	Dec 12 23:31:09 multinode-510563 kubelet[916]: E1212 23:31:09.579718     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-zcxks" podUID="503de693-19d6-45c5-97c6-3b8e5657bfee"
	Dec 12 23:31:09 multinode-510563 kubelet[916]: E1212 23:31:09.580150     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-4vnmj" podUID="00d42ae1-e3c5-461d-9019-b5609191598e"
	Dec 12 23:31:11 multinode-510563 kubelet[916]: E1212 23:31:11.059465     916 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 23:31:11 multinode-510563 kubelet[916]: E1212 23:31:11.059556     916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/503de693-19d6-45c5-97c6-3b8e5657bfee-config-volume podName:503de693-19d6-45c5-97c6-3b8e5657bfee nodeName:}" failed. No retries permitted until 2023-12-12 23:31:15.059541231 +0000 UTC m=+14.764094354 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/503de693-19d6-45c5-97c6-3b8e5657bfee-config-volume") pod "coredns-5dd5756b68-zcxks" (UID: "503de693-19d6-45c5-97c6-3b8e5657bfee") : object "kube-system"/"coredns" not registered
	Dec 12 23:31:11 multinode-510563 kubelet[916]: E1212 23:31:11.160716     916 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Dec 12 23:31:11 multinode-510563 kubelet[916]: E1212 23:31:11.160879     916 projected.go:198] Error preparing data for projected volume kube-api-access-jjwzt for pod default/busybox-5bc68d56bd-4vnmj: object "default"/"kube-root-ca.crt" not registered
	Dec 12 23:31:11 multinode-510563 kubelet[916]: E1212 23:31:11.160974     916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00d42ae1-e3c5-461d-9019-b5609191598e-kube-api-access-jjwzt podName:00d42ae1-e3c5-461d-9019-b5609191598e nodeName:}" failed. No retries permitted until 2023-12-12 23:31:15.160957044 +0000 UTC m=+14.865510165 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-jjwzt" (UniqueName: "kubernetes.io/projected/00d42ae1-e3c5-461d-9019-b5609191598e-kube-api-access-jjwzt") pod "busybox-5bc68d56bd-4vnmj" (UID: "00d42ae1-e3c5-461d-9019-b5609191598e") : object "default"/"kube-root-ca.crt" not registered
	Dec 12 23:31:11 multinode-510563 kubelet[916]: E1212 23:31:11.580104     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-4vnmj" podUID="00d42ae1-e3c5-461d-9019-b5609191598e"
	Dec 12 23:31:11 multinode-510563 kubelet[916]: E1212 23:31:11.580388     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-zcxks" podUID="503de693-19d6-45c5-97c6-3b8e5657bfee"
	Dec 12 23:31:13 multinode-510563 kubelet[916]: I1212 23:31:13.088162     916 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 12 23:31:39 multinode-510563 kubelet[916]: I1212 23:31:39.756155     916 scope.go:117] "RemoveContainer" containerID="914450b1d9b01f7117b9650eedbdf645d9a82d19515c259fcd5cb7d797532c06"
	Dec 12 23:32:00 multinode-510563 kubelet[916]: E1212 23:32:00.599277     916 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:32:00 multinode-510563 kubelet[916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:32:00 multinode-510563 kubelet[916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:32:00 multinode-510563 kubelet[916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:33:00 multinode-510563 kubelet[916]: E1212 23:33:00.599776     916 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:33:00 multinode-510563 kubelet[916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:33:00 multinode-510563 kubelet[916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:33:00 multinode-510563 kubelet[916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:34:00 multinode-510563 kubelet[916]: E1212 23:34:00.599417     916 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:34:00 multinode-510563 kubelet[916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:34:00 multinode-510563 kubelet[916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:34:00 multinode-510563 kubelet[916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-510563 -n multinode-510563
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-510563 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (688.39s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (143.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 stop
E1212 23:35:11.805075  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-510563 stop: exit status 82 (2m1.307730183s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-510563"  ...
	* Stopping node "multinode-510563"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-510563 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-510563 status: exit status 3 (18.611629345s)

                                                
                                                
-- stdout --
	multinode-510563
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-510563-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 23:37:13.540844  162489 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host
	E1212 23:37:13.540891  162489 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:351: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-510563 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-510563 -n multinode-510563
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-510563 -n multinode-510563: exit status 3 (3.190854082s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 23:37:16.900791  162589 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host
	E1212 23:37:16.900810  162589 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-510563" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.11s)
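Note: the GUEST_STOP_TIMEOUT failure above (exit status 82) left the control-plane VM still reported as "Running" after two stop attempts, and the stderr advice box asks for a log bundle. A minimal sketch of collecting that bundle for this profile, reusing only the binary path, profile name, and `logs --file` flag already shown in this report (this step is illustrative and not part of the recorded test run):

	out/minikube-linux-amd64 -p multinode-510563 logs --file=logs.txt
	# attach logs.txt plus /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log to the GitHub issue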

                                                
                                    
TestPreload (265.55s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-324893 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1212 23:47:30.663180  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1212 23:47:45.320328  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-324893 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m51.766097289s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-324893 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-324893 image pull gcr.io/k8s-minikube/busybox: (2.911511827s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-324893
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-324893: (7.100843326s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-324893 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1212 23:49:27.616644  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-324893 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m20.613818634s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-324893 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
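Note: the list above is what the restarted cluster reports, and the test expected gcr.io/k8s-minikube/busybox to survive the stop/start cycle. A quick manual re-check of the same condition, sketched from the `image list` command already used in this run (the grep pipeline is an illustration, not the actual check in preload_test.go):

	out/minikube-linux-amd64 -p test-preload-324893 image list | grep k8s-minikube/busybox || echo "busybox image missing after restart"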
panic.go:523: *** TestPreload FAILED at 2023-12-12 23:50:06.519498063 +0000 UTC m=+3330.075635615
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-324893 -n test-preload-324893
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-324893 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-324893 logs -n 25: (1.152647463s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-510563 ssh -n                                                                 | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | multinode-510563-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-510563 ssh -n multinode-510563 sudo cat                                       | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | /home/docker/cp-test_multinode-510563-m03_multinode-510563.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-510563 cp multinode-510563-m03:/home/docker/cp-test.txt                       | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | multinode-510563-m02:/home/docker/cp-test_multinode-510563-m03_multinode-510563-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-510563 ssh -n                                                                 | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | multinode-510563-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-510563 ssh -n multinode-510563-m02 sudo cat                                   | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	|         | /home/docker/cp-test_multinode-510563-m03_multinode-510563-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-510563 node stop m03                                                          | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:22 UTC |
	| node    | multinode-510563 node start                                                             | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:22 UTC | 12 Dec 23 23:23 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-510563                                                                | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:23 UTC |                     |
	| stop    | -p multinode-510563                                                                     | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:23 UTC |                     |
	| start   | -p multinode-510563                                                                     | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:25 UTC | 12 Dec 23 23:34 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-510563                                                                | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:34 UTC |                     |
	| node    | multinode-510563 node delete                                                            | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:34 UTC | 12 Dec 23 23:34 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-510563 stop                                                                   | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:34 UTC |                     |
	| start   | -p multinode-510563                                                                     | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:37 UTC | 12 Dec 23 23:44 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-510563                                                                | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:44 UTC |                     |
	| start   | -p multinode-510563-m02                                                                 | multinode-510563-m02 | jenkins | v1.32.0 | 12 Dec 23 23:44 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-510563-m03                                                                 | multinode-510563-m03 | jenkins | v1.32.0 | 12 Dec 23 23:44 UTC | 12 Dec 23 23:45 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-510563                                                                 | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:45 UTC |                     |
	| delete  | -p multinode-510563-m03                                                                 | multinode-510563-m03 | jenkins | v1.32.0 | 12 Dec 23 23:45 UTC | 12 Dec 23 23:45 UTC |
	| delete  | -p multinode-510563                                                                     | multinode-510563     | jenkins | v1.32.0 | 12 Dec 23 23:45 UTC | 12 Dec 23 23:45 UTC |
	| start   | -p test-preload-324893                                                                  | test-preload-324893  | jenkins | v1.32.0 | 12 Dec 23 23:45 UTC | 12 Dec 23 23:48 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-324893 image pull                                                          | test-preload-324893  | jenkins | v1.32.0 | 12 Dec 23 23:48 UTC | 12 Dec 23 23:48 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-324893                                                                  | test-preload-324893  | jenkins | v1.32.0 | 12 Dec 23 23:48 UTC | 12 Dec 23 23:48 UTC |
	| start   | -p test-preload-324893                                                                  | test-preload-324893  | jenkins | v1.32.0 | 12 Dec 23 23:48 UTC | 12 Dec 23 23:50 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-324893 image list                                                          | test-preload-324893  | jenkins | v1.32.0 | 12 Dec 23 23:50 UTC | 12 Dec 23 23:50 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 23:48:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 23:48:45.715919  165449 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:48:45.716069  165449 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:48:45.716078  165449 out.go:309] Setting ErrFile to fd 2...
	I1212 23:48:45.716083  165449 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:48:45.716278  165449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1212 23:48:45.717001  165449 out.go:303] Setting JSON to false
	I1212 23:48:45.717911  165449 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9074,"bootTime":1702415852,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 23:48:45.717979  165449 start.go:138] virtualization: kvm guest
	I1212 23:48:45.720246  165449 out.go:177] * [test-preload-324893] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 23:48:45.721676  165449 out.go:177]   - MINIKUBE_LOCATION=17777
	I1212 23:48:45.722834  165449 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:48:45.721697  165449 notify.go:220] Checking for updates...
	I1212 23:48:45.725374  165449 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:48:45.726788  165449 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 23:48:45.728145  165449 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 23:48:45.729527  165449 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:48:45.731180  165449 config.go:182] Loaded profile config "test-preload-324893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1212 23:48:45.731607  165449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:48:45.731664  165449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:48:45.745592  165449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42357
	I1212 23:48:45.745953  165449 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:48:45.746476  165449 main.go:141] libmachine: Using API Version  1
	I1212 23:48:45.746499  165449 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:48:45.746814  165449 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:48:45.746986  165449 main.go:141] libmachine: (test-preload-324893) Calling .DriverName
	I1212 23:48:45.748689  165449 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1212 23:48:45.750118  165449 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:48:45.750428  165449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:48:45.750472  165449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:48:45.764093  165449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44379
	I1212 23:48:45.764450  165449 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:48:45.765040  165449 main.go:141] libmachine: Using API Version  1
	I1212 23:48:45.765062  165449 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:48:45.765395  165449 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:48:45.765578  165449 main.go:141] libmachine: (test-preload-324893) Calling .DriverName
	I1212 23:48:45.798154  165449 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 23:48:45.799558  165449 start.go:298] selected driver: kvm2
	I1212 23:48:45.799572  165449 start.go:902] validating driver "kvm2" against &{Name:test-preload-324893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-324893 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:48:45.799688  165449 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:48:45.800405  165449 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:48:45.800518  165449 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17777-136241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 23:48:45.814708  165449 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 23:48:45.815043  165449 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 23:48:45.815111  165449 cni.go:84] Creating CNI manager for ""
	I1212 23:48:45.815132  165449 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:48:45.815149  165449 start_flags.go:323] config:
	{Name:test-preload-324893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-324893 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:48:45.815341  165449 iso.go:125] acquiring lock: {Name:mkaf0c5717f5bf6253bd7ebf86eb20a82d195bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:48:45.817272  165449 out.go:177] * Starting control plane node test-preload-324893 in cluster test-preload-324893
	I1212 23:48:45.818771  165449 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1212 23:48:45.945130  165449 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1212 23:48:45.945179  165449 cache.go:56] Caching tarball of preloaded images
	I1212 23:48:45.945345  165449 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1212 23:48:45.947292  165449 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1212 23:48:45.948875  165449 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1212 23:48:46.072865  165449 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1212 23:48:59.181073  165449 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1212 23:48:59.181168  165449 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1212 23:49:00.075753  165449 cache.go:59] Finished verifying existence of preloaded tar for  v1.24.4 on crio
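For reference, the preload fetch and checksum verification above can be reproduced by hand with the URL and md5 from the log (an illustrative sketch, not a step the test itself runs):

    # fetch the same v1.24.4 cri-o preload tarball minikube downloaded above
    curl -fLo preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 \
      "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4"
    # compare against the checksum minikube requested in the download URL
    echo "b2ee0ab83ed99f9e7ff71cb0cf27e8f9  preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4" | md5sum -c -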
	I1212 23:49:00.075927  165449 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/test-preload-324893/config.json ...
	I1212 23:49:00.076186  165449 start.go:365] acquiring machines lock for test-preload-324893: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:49:00.076265  165449 start.go:369] acquired machines lock for "test-preload-324893" in 51.693µs
	I1212 23:49:00.076282  165449 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:49:00.076293  165449 fix.go:54] fixHost starting: 
	I1212 23:49:00.076733  165449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:49:00.076786  165449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:49:00.091002  165449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37841
	I1212 23:49:00.091442  165449 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:49:00.091868  165449 main.go:141] libmachine: Using API Version  1
	I1212 23:49:00.091893  165449 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:49:00.092235  165449 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:49:00.092443  165449 main.go:141] libmachine: (test-preload-324893) Calling .DriverName
	I1212 23:49:00.092655  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetState
	I1212 23:49:00.094390  165449 fix.go:102] recreateIfNeeded on test-preload-324893: state=Stopped err=<nil>
	I1212 23:49:00.094412  165449 main.go:141] libmachine: (test-preload-324893) Calling .DriverName
	W1212 23:49:00.094568  165449 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:49:00.096905  165449 out.go:177] * Restarting existing kvm2 VM for "test-preload-324893" ...
	I1212 23:49:00.098600  165449 main.go:141] libmachine: (test-preload-324893) Calling .Start
	I1212 23:49:00.098784  165449 main.go:141] libmachine: (test-preload-324893) Ensuring networks are active...
	I1212 23:49:00.099463  165449 main.go:141] libmachine: (test-preload-324893) Ensuring network default is active
	I1212 23:49:00.099736  165449 main.go:141] libmachine: (test-preload-324893) Ensuring network mk-test-preload-324893 is active
	I1212 23:49:00.100256  165449 main.go:141] libmachine: (test-preload-324893) Getting domain xml...
	I1212 23:49:00.100941  165449 main.go:141] libmachine: (test-preload-324893) Creating domain...
	I1212 23:49:01.301775  165449 main.go:141] libmachine: (test-preload-324893) Waiting to get IP...
	I1212 23:49:01.302638  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:01.303072  165449 main.go:141] libmachine: (test-preload-324893) DBG | unable to find current IP address of domain test-preload-324893 in network mk-test-preload-324893
	I1212 23:49:01.303166  165449 main.go:141] libmachine: (test-preload-324893) DBG | I1212 23:49:01.303069  165507 retry.go:31] will retry after 304.039479ms: waiting for machine to come up
	I1212 23:49:01.608617  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:01.609067  165449 main.go:141] libmachine: (test-preload-324893) DBG | unable to find current IP address of domain test-preload-324893 in network mk-test-preload-324893
	I1212 23:49:01.609091  165449 main.go:141] libmachine: (test-preload-324893) DBG | I1212 23:49:01.609035  165507 retry.go:31] will retry after 277.02582ms: waiting for machine to come up
	I1212 23:49:01.887761  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:01.888190  165449 main.go:141] libmachine: (test-preload-324893) DBG | unable to find current IP address of domain test-preload-324893 in network mk-test-preload-324893
	I1212 23:49:01.888212  165449 main.go:141] libmachine: (test-preload-324893) DBG | I1212 23:49:01.888151  165507 retry.go:31] will retry after 460.763622ms: waiting for machine to come up
	I1212 23:49:02.350592  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:02.350978  165449 main.go:141] libmachine: (test-preload-324893) DBG | unable to find current IP address of domain test-preload-324893 in network mk-test-preload-324893
	I1212 23:49:02.351006  165449 main.go:141] libmachine: (test-preload-324893) DBG | I1212 23:49:02.350940  165507 retry.go:31] will retry after 372.133417ms: waiting for machine to come up
	I1212 23:49:02.724586  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:02.725042  165449 main.go:141] libmachine: (test-preload-324893) DBG | unable to find current IP address of domain test-preload-324893 in network mk-test-preload-324893
	I1212 23:49:02.725082  165449 main.go:141] libmachine: (test-preload-324893) DBG | I1212 23:49:02.724994  165507 retry.go:31] will retry after 640.108498ms: waiting for machine to come up
	I1212 23:49:03.366704  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:03.367122  165449 main.go:141] libmachine: (test-preload-324893) DBG | unable to find current IP address of domain test-preload-324893 in network mk-test-preload-324893
	I1212 23:49:03.367189  165449 main.go:141] libmachine: (test-preload-324893) DBG | I1212 23:49:03.367103  165507 retry.go:31] will retry after 776.639576ms: waiting for machine to come up
	I1212 23:49:04.144991  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:04.145370  165449 main.go:141] libmachine: (test-preload-324893) DBG | unable to find current IP address of domain test-preload-324893 in network mk-test-preload-324893
	I1212 23:49:04.145404  165449 main.go:141] libmachine: (test-preload-324893) DBG | I1212 23:49:04.145326  165507 retry.go:31] will retry after 1.093539725s: waiting for machine to come up
	I1212 23:49:05.240979  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:05.241431  165449 main.go:141] libmachine: (test-preload-324893) DBG | unable to find current IP address of domain test-preload-324893 in network mk-test-preload-324893
	I1212 23:49:05.241460  165449 main.go:141] libmachine: (test-preload-324893) DBG | I1212 23:49:05.241394  165507 retry.go:31] will retry after 989.004142ms: waiting for machine to come up
	I1212 23:49:06.231479  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:06.231853  165449 main.go:141] libmachine: (test-preload-324893) DBG | unable to find current IP address of domain test-preload-324893 in network mk-test-preload-324893
	I1212 23:49:06.231940  165449 main.go:141] libmachine: (test-preload-324893) DBG | I1212 23:49:06.231871  165507 retry.go:31] will retry after 1.367544995s: waiting for machine to come up
	I1212 23:49:07.600524  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:07.600877  165449 main.go:141] libmachine: (test-preload-324893) DBG | unable to find current IP address of domain test-preload-324893 in network mk-test-preload-324893
	I1212 23:49:07.600907  165449 main.go:141] libmachine: (test-preload-324893) DBG | I1212 23:49:07.600820  165507 retry.go:31] will retry after 1.623286942s: waiting for machine to come up
	I1212 23:49:09.226810  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:09.227296  165449 main.go:141] libmachine: (test-preload-324893) DBG | unable to find current IP address of domain test-preload-324893 in network mk-test-preload-324893
	I1212 23:49:09.227324  165449 main.go:141] libmachine: (test-preload-324893) DBG | I1212 23:49:09.227245  165507 retry.go:31] will retry after 2.032701701s: waiting for machine to come up
	I1212 23:49:11.261420  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:11.261835  165449 main.go:141] libmachine: (test-preload-324893) DBG | unable to find current IP address of domain test-preload-324893 in network mk-test-preload-324893
	I1212 23:49:11.261863  165449 main.go:141] libmachine: (test-preload-324893) DBG | I1212 23:49:11.261773  165507 retry.go:31] will retry after 3.586710538s: waiting for machine to come up
	I1212 23:49:14.852524  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:14.852985  165449 main.go:141] libmachine: (test-preload-324893) DBG | unable to find current IP address of domain test-preload-324893 in network mk-test-preload-324893
	I1212 23:49:14.853009  165449 main.go:141] libmachine: (test-preload-324893) DBG | I1212 23:49:14.852937  165507 retry.go:31] will retry after 3.433356429s: waiting for machine to come up
	I1212 23:49:18.287483  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:18.287961  165449 main.go:141] libmachine: (test-preload-324893) Found IP for machine: 192.168.39.69
	I1212 23:49:18.287987  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has current primary IP address 192.168.39.69 and MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:18.287998  165449 main.go:141] libmachine: (test-preload-324893) Reserving static IP address...
	I1212 23:49:18.288555  165449 main.go:141] libmachine: (test-preload-324893) DBG | found host DHCP lease matching {name: "test-preload-324893", mac: "52:54:00:2c:0d:fb", ip: "192.168.39.69"} in network mk-test-preload-324893: {Iface:virbr1 ExpiryTime:2023-12-13 00:49:12 +0000 UTC Type:0 Mac:52:54:00:2c:0d:fb Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:test-preload-324893 Clientid:01:52:54:00:2c:0d:fb}
	I1212 23:49:18.288600  165449 main.go:141] libmachine: (test-preload-324893) DBG | skip adding static IP to network mk-test-preload-324893 - found existing host DHCP lease matching {name: "test-preload-324893", mac: "52:54:00:2c:0d:fb", ip: "192.168.39.69"}
	I1212 23:49:18.288611  165449 main.go:141] libmachine: (test-preload-324893) Reserved static IP address: 192.168.39.69
	I1212 23:49:18.288628  165449 main.go:141] libmachine: (test-preload-324893) Waiting for SSH to be available...
	I1212 23:49:18.288652  165449 main.go:141] libmachine: (test-preload-324893) DBG | Getting to WaitForSSH function...
	I1212 23:49:18.290733  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:18.291076  165449 main.go:141] libmachine: (test-preload-324893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:0d:fb", ip: ""} in network mk-test-preload-324893: {Iface:virbr1 ExpiryTime:2023-12-13 00:49:12 +0000 UTC Type:0 Mac:52:54:00:2c:0d:fb Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:test-preload-324893 Clientid:01:52:54:00:2c:0d:fb}
	I1212 23:49:18.291107  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined IP address 192.168.39.69 and MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:18.291219  165449 main.go:141] libmachine: (test-preload-324893) DBG | Using SSH client type: external
	I1212 23:49:18.291242  165449 main.go:141] libmachine: (test-preload-324893) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/test-preload-324893/id_rsa (-rw-------)
	I1212 23:49:18.291262  165449 main.go:141] libmachine: (test-preload-324893) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/test-preload-324893/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:49:18.291271  165449 main.go:141] libmachine: (test-preload-324893) DBG | About to run SSH command:
	I1212 23:49:18.291283  165449 main.go:141] libmachine: (test-preload-324893) DBG | exit 0
	I1212 23:49:18.384503  165449 main.go:141] libmachine: (test-preload-324893) DBG | SSH cmd err, output: <nil>: 
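The WaitForSSH probe above reduces to a plain ssh call; a minimal equivalent built from the key path and address shown in the log would be:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/test-preload-324893/id_rsa \
      docker@192.168.39.69 'exit 0'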
	I1212 23:49:18.384986  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetConfigRaw
	I1212 23:49:18.385592  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetIP
	I1212 23:49:18.388100  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:18.388490  165449 main.go:141] libmachine: (test-preload-324893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:0d:fb", ip: ""} in network mk-test-preload-324893: {Iface:virbr1 ExpiryTime:2023-12-13 00:49:12 +0000 UTC Type:0 Mac:52:54:00:2c:0d:fb Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:test-preload-324893 Clientid:01:52:54:00:2c:0d:fb}
	I1212 23:49:18.388525  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined IP address 192.168.39.69 and MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:18.388772  165449 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/test-preload-324893/config.json ...
	I1212 23:49:18.389043  165449 machine.go:88] provisioning docker machine ...
	I1212 23:49:18.389067  165449 main.go:141] libmachine: (test-preload-324893) Calling .DriverName
	I1212 23:49:18.389293  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetMachineName
	I1212 23:49:18.389464  165449 buildroot.go:166] provisioning hostname "test-preload-324893"
	I1212 23:49:18.389482  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetMachineName
	I1212 23:49:18.389634  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHHostname
	I1212 23:49:18.391720  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:18.392114  165449 main.go:141] libmachine: (test-preload-324893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:0d:fb", ip: ""} in network mk-test-preload-324893: {Iface:virbr1 ExpiryTime:2023-12-13 00:49:12 +0000 UTC Type:0 Mac:52:54:00:2c:0d:fb Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:test-preload-324893 Clientid:01:52:54:00:2c:0d:fb}
	I1212 23:49:18.392154  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined IP address 192.168.39.69 and MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:18.392291  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHPort
	I1212 23:49:18.392522  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHKeyPath
	I1212 23:49:18.392677  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHKeyPath
	I1212 23:49:18.392838  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHUsername
	I1212 23:49:18.393026  165449 main.go:141] libmachine: Using SSH client type: native
	I1212 23:49:18.393355  165449 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1212 23:49:18.393368  165449 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-324893 && echo "test-preload-324893" | sudo tee /etc/hostname
	I1212 23:49:18.530672  165449 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-324893
	
	I1212 23:49:18.530712  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHHostname
	I1212 23:49:18.533224  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:18.533569  165449 main.go:141] libmachine: (test-preload-324893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:0d:fb", ip: ""} in network mk-test-preload-324893: {Iface:virbr1 ExpiryTime:2023-12-13 00:49:12 +0000 UTC Type:0 Mac:52:54:00:2c:0d:fb Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:test-preload-324893 Clientid:01:52:54:00:2c:0d:fb}
	I1212 23:49:18.533594  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined IP address 192.168.39.69 and MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:18.533701  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHPort
	I1212 23:49:18.533882  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHKeyPath
	I1212 23:49:18.534037  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHKeyPath
	I1212 23:49:18.534179  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHUsername
	I1212 23:49:18.534328  165449 main.go:141] libmachine: Using SSH client type: native
	I1212 23:49:18.534655  165449 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1212 23:49:18.534688  165449 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-324893' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-324893/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-324893' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:49:18.665341  165449 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:49:18.665379  165449 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1212 23:49:18.665404  165449 buildroot.go:174] setting up certificates
	I1212 23:49:18.665417  165449 provision.go:83] configureAuth start
	I1212 23:49:18.665436  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetMachineName
	I1212 23:49:18.665711  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetIP
	I1212 23:49:18.668619  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:18.669027  165449 main.go:141] libmachine: (test-preload-324893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:0d:fb", ip: ""} in network mk-test-preload-324893: {Iface:virbr1 ExpiryTime:2023-12-13 00:49:12 +0000 UTC Type:0 Mac:52:54:00:2c:0d:fb Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:test-preload-324893 Clientid:01:52:54:00:2c:0d:fb}
	I1212 23:49:18.669062  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined IP address 192.168.39.69 and MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:18.669234  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHHostname
	I1212 23:49:18.671231  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:18.671504  165449 main.go:141] libmachine: (test-preload-324893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:0d:fb", ip: ""} in network mk-test-preload-324893: {Iface:virbr1 ExpiryTime:2023-12-13 00:49:12 +0000 UTC Type:0 Mac:52:54:00:2c:0d:fb Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:test-preload-324893 Clientid:01:52:54:00:2c:0d:fb}
	I1212 23:49:18.671550  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined IP address 192.168.39.69 and MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:18.671661  165449 provision.go:138] copyHostCerts
	I1212 23:49:18.671719  165449 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1212 23:49:18.671742  165449 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1212 23:49:18.671824  165449 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1212 23:49:18.671989  165449 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1212 23:49:18.672008  165449 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1212 23:49:18.672050  165449 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1212 23:49:18.672129  165449 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1212 23:49:18.672140  165449 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1212 23:49:18.672173  165449 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1212 23:49:18.672234  165449 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.test-preload-324893 san=[192.168.39.69 192.168.39.69 localhost 127.0.0.1 minikube test-preload-324893]
	I1212 23:49:18.950172  165449 provision.go:172] copyRemoteCerts
	I1212 23:49:18.950245  165449 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:49:18.950277  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHHostname
	I1212 23:49:18.953359  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:18.953706  165449 main.go:141] libmachine: (test-preload-324893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:0d:fb", ip: ""} in network mk-test-preload-324893: {Iface:virbr1 ExpiryTime:2023-12-13 00:49:12 +0000 UTC Type:0 Mac:52:54:00:2c:0d:fb Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:test-preload-324893 Clientid:01:52:54:00:2c:0d:fb}
	I1212 23:49:18.953744  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined IP address 192.168.39.69 and MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:18.953868  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHPort
	I1212 23:49:18.954081  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHKeyPath
	I1212 23:49:18.954294  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHUsername
	I1212 23:49:18.954450  165449 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/test-preload-324893/id_rsa Username:docker}
	I1212 23:49:19.045721  165449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:49:19.069513  165449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 23:49:19.092781  165449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
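To spot-check the certificates that were just copied into the guest, one could run something like the following (assumes openssl is available in the guest image; not part of the test itself):

    out/minikube-linux-amd64 -p test-preload-324893 ssh "sudo openssl x509 -in /etc/docker/server.pem -noout -subject -enddate"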
	I1212 23:49:19.115715  165449 provision.go:86] duration metric: configureAuth took 450.281336ms
	I1212 23:49:19.115741  165449 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:49:19.115902  165449 config.go:182] Loaded profile config "test-preload-324893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1212 23:49:19.115981  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHHostname
	I1212 23:49:19.118793  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:19.119186  165449 main.go:141] libmachine: (test-preload-324893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:0d:fb", ip: ""} in network mk-test-preload-324893: {Iface:virbr1 ExpiryTime:2023-12-13 00:49:12 +0000 UTC Type:0 Mac:52:54:00:2c:0d:fb Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:test-preload-324893 Clientid:01:52:54:00:2c:0d:fb}
	I1212 23:49:19.119230  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined IP address 192.168.39.69 and MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:19.119393  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHPort
	I1212 23:49:19.119593  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHKeyPath
	I1212 23:49:19.119804  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHKeyPath
	I1212 23:49:19.119991  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHUsername
	I1212 23:49:19.120130  165449 main.go:141] libmachine: Using SSH client type: native
	I1212 23:49:19.120543  165449 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1212 23:49:19.120562  165449 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:49:19.446062  165449 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:49:19.446100  165449 machine.go:91] provisioned docker machine in 1.057040379s
	I1212 23:49:19.446111  165449 start.go:300] post-start starting for "test-preload-324893" (driver="kvm2")
	I1212 23:49:19.446121  165449 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:49:19.446141  165449 main.go:141] libmachine: (test-preload-324893) Calling .DriverName
	I1212 23:49:19.446463  165449 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:49:19.446485  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHHostname
	I1212 23:49:19.449393  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:19.449746  165449 main.go:141] libmachine: (test-preload-324893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:0d:fb", ip: ""} in network mk-test-preload-324893: {Iface:virbr1 ExpiryTime:2023-12-13 00:49:12 +0000 UTC Type:0 Mac:52:54:00:2c:0d:fb Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:test-preload-324893 Clientid:01:52:54:00:2c:0d:fb}
	I1212 23:49:19.449777  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined IP address 192.168.39.69 and MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:19.449914  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHPort
	I1212 23:49:19.450139  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHKeyPath
	I1212 23:49:19.450318  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHUsername
	I1212 23:49:19.450484  165449 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/test-preload-324893/id_rsa Username:docker}
	I1212 23:49:19.542994  165449 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:49:19.547151  165449 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:49:19.547185  165449 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1212 23:49:19.547289  165449 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1212 23:49:19.547428  165449 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1212 23:49:19.547542  165449 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:49:19.556625  165449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1212 23:49:19.578508  165449 start.go:303] post-start completed in 132.380626ms
	I1212 23:49:19.578531  165449 fix.go:56] fixHost completed within 19.502239859s
	I1212 23:49:19.578550  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHHostname
	I1212 23:49:19.580877  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:19.581192  165449 main.go:141] libmachine: (test-preload-324893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:0d:fb", ip: ""} in network mk-test-preload-324893: {Iface:virbr1 ExpiryTime:2023-12-13 00:49:12 +0000 UTC Type:0 Mac:52:54:00:2c:0d:fb Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:test-preload-324893 Clientid:01:52:54:00:2c:0d:fb}
	I1212 23:49:19.581223  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined IP address 192.168.39.69 and MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:19.581348  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHPort
	I1212 23:49:19.581561  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHKeyPath
	I1212 23:49:19.581702  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHKeyPath
	I1212 23:49:19.581915  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHUsername
	I1212 23:49:19.582140  165449 main.go:141] libmachine: Using SSH client type: native
	I1212 23:49:19.582456  165449 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1212 23:49:19.582469  165449 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:49:19.705813  165449 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702424959.654740997
	
	I1212 23:49:19.705836  165449 fix.go:206] guest clock: 1702424959.654740997
	I1212 23:49:19.705843  165449 fix.go:219] Guest: 2023-12-12 23:49:19.654740997 +0000 UTC Remote: 2023-12-12 23:49:19.578534833 +0000 UTC m=+33.912999015 (delta=76.206164ms)
	I1212 23:49:19.705891  165449 fix.go:190] guest clock delta is within tolerance: 76.206164ms
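(The reported delta is simply the guest timestamp minus the host-side timestamp: 19.654740997 s - 19.578534833 s = 0.076206164 s, i.e. the 76.206164ms figure compared against the clock-skew tolerance.)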
	I1212 23:49:19.705898  165449 start.go:83] releasing machines lock for "test-preload-324893", held for 19.629621322s
	I1212 23:49:19.705925  165449 main.go:141] libmachine: (test-preload-324893) Calling .DriverName
	I1212 23:49:19.706232  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetIP
	I1212 23:49:19.708764  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:19.709048  165449 main.go:141] libmachine: (test-preload-324893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:0d:fb", ip: ""} in network mk-test-preload-324893: {Iface:virbr1 ExpiryTime:2023-12-13 00:49:12 +0000 UTC Type:0 Mac:52:54:00:2c:0d:fb Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:test-preload-324893 Clientid:01:52:54:00:2c:0d:fb}
	I1212 23:49:19.709082  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined IP address 192.168.39.69 and MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:19.709180  165449 main.go:141] libmachine: (test-preload-324893) Calling .DriverName
	I1212 23:49:19.709720  165449 main.go:141] libmachine: (test-preload-324893) Calling .DriverName
	I1212 23:49:19.709903  165449 main.go:141] libmachine: (test-preload-324893) Calling .DriverName
	I1212 23:49:19.710018  165449 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:49:19.710072  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHHostname
	I1212 23:49:19.710122  165449 ssh_runner.go:195] Run: cat /version.json
	I1212 23:49:19.710148  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHHostname
	I1212 23:49:19.712876  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:19.713174  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:19.713238  165449 main.go:141] libmachine: (test-preload-324893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:0d:fb", ip: ""} in network mk-test-preload-324893: {Iface:virbr1 ExpiryTime:2023-12-13 00:49:12 +0000 UTC Type:0 Mac:52:54:00:2c:0d:fb Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:test-preload-324893 Clientid:01:52:54:00:2c:0d:fb}
	I1212 23:49:19.713264  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined IP address 192.168.39.69 and MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:19.713399  165449 main.go:141] libmachine: (test-preload-324893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:0d:fb", ip: ""} in network mk-test-preload-324893: {Iface:virbr1 ExpiryTime:2023-12-13 00:49:12 +0000 UTC Type:0 Mac:52:54:00:2c:0d:fb Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:test-preload-324893 Clientid:01:52:54:00:2c:0d:fb}
	I1212 23:49:19.713424  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined IP address 192.168.39.69 and MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:19.713430  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHPort
	I1212 23:49:19.713593  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHPort
	I1212 23:49:19.713678  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHKeyPath
	I1212 23:49:19.713755  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHKeyPath
	I1212 23:49:19.713868  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHUsername
	I1212 23:49:19.713948  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHUsername
	I1212 23:49:19.714155  165449 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/test-preload-324893/id_rsa Username:docker}
	I1212 23:49:19.714156  165449 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/test-preload-324893/id_rsa Username:docker}
	I1212 23:49:19.801304  165449 ssh_runner.go:195] Run: systemctl --version
	I1212 23:49:19.823560  165449 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:49:19.967273  165449 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:49:19.976326  165449 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:49:19.976409  165449 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:49:19.991346  165449 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:49:19.991373  165449 start.go:475] detecting cgroup driver to use...
	I1212 23:49:19.991447  165449 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:49:20.005425  165449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:49:20.017669  165449 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:49:20.017735  165449 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:49:20.030073  165449 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:49:20.042085  165449 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:49:20.150040  165449 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:49:20.268490  165449 docker.go:219] disabling docker service ...
	I1212 23:49:20.268579  165449 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:49:20.280956  165449 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:49:20.292614  165449 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:49:20.393803  165449 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:49:20.492743  165449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:49:20.504822  165449 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:49:20.521781  165449 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1212 23:49:20.521854  165449 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:49:20.530789  165449 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:49:20.530856  165449 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:49:20.539698  165449 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:49:20.548549  165449 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
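The sed edits above converge on a small set of overrides in /etc/crio/crio.conf.d/02-crio.conf; the resulting file is not captured in this log, but a hypothetical inspection would look like:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected (assumed):
    #   pause_image = "registry.k8s.io/pause:3.7"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"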
	I1212 23:49:20.557254  165449 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:49:20.566283  165449 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:49:20.574155  165449 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:49:20.574211  165449 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:49:20.586836  165449 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:49:20.595527  165449 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:49:20.699882  165449 ssh_runner.go:195] Run: sudo systemctl restart crio
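The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed so that CRI-O uses the cgroupfs cgroup manager and the expected pause image before the service is restarted. A minimal Go sketch of the same rewrite, assuming only the file path and option name shown in the log (everything else is illustrative, not minikube's actual implementation):

    // Rewrite the cgroup_manager option in the CRI-O drop-in config,
    // mirroring the sed command recorded in the log above.
    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
    	data, err := os.ReadFile(conf)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Replace any existing cgroup_manager line with the cgroupfs setting.
    	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    	out := re.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
    	if err := os.WriteFile(conf, out, 0o644); err != nil {
    		log.Fatal(err)
    	}
    }

After a rewrite like this, the restart of crio (next line) is what actually makes the new cgroup manager take effect.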
	I1212 23:49:20.866933  165449 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:49:20.867001  165449 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:49:20.872200  165449 start.go:543] Will wait 60s for crictl version
	I1212 23:49:20.872250  165449 ssh_runner.go:195] Run: which crictl
	I1212 23:49:20.876112  165449 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:49:20.915259  165449 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:49:20.915344  165449 ssh_runner.go:195] Run: crio --version
	I1212 23:49:20.963259  165449 ssh_runner.go:195] Run: crio --version
	I1212 23:49:21.009524  165449 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.24.1 ...
	I1212 23:49:21.010879  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetIP
	I1212 23:49:21.013715  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:21.014088  165449 main.go:141] libmachine: (test-preload-324893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:0d:fb", ip: ""} in network mk-test-preload-324893: {Iface:virbr1 ExpiryTime:2023-12-13 00:49:12 +0000 UTC Type:0 Mac:52:54:00:2c:0d:fb Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:test-preload-324893 Clientid:01:52:54:00:2c:0d:fb}
	I1212 23:49:21.014117  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined IP address 192.168.39.69 and MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:21.014322  165449 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 23:49:21.018578  165449 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:49:21.031454  165449 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1212 23:49:21.031518  165449 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:49:21.070654  165449 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1212 23:49:21.070712  165449 ssh_runner.go:195] Run: which lz4
	I1212 23:49:21.074957  165449 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 23:49:21.079122  165449 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:49:21.079150  165449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1212 23:49:22.856495  165449 crio.go:444] Took 1.781562 seconds to copy over tarball
	I1212 23:49:22.856565  165449 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:49:25.928653  165449 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.072063397s)
	I1212 23:49:25.928680  165449 crio.go:451] Took 3.072162 seconds to extract the tarball
	I1212 23:49:25.928688  165449 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 23:49:25.969992  165449 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:49:26.018520  165449 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1212 23:49:26.018545  165449 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 23:49:26.018614  165449 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:49:26.018627  165449 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1212 23:49:26.018653  165449 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1212 23:49:26.018672  165449 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1212 23:49:26.018725  165449 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I1212 23:49:26.018761  165449 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1212 23:49:26.018790  165449 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1212 23:49:26.018836  165449 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1212 23:49:26.019960  165449 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1212 23:49:26.019980  165449 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1212 23:49:26.019958  165449 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1212 23:49:26.019958  165449 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:49:26.020009  165449 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1212 23:49:26.019962  165449 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1212 23:49:26.019959  165449 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1212 23:49:26.020126  165449 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1212 23:49:26.161228  165449 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1212 23:49:26.162770  165449 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1212 23:49:26.167980  165449 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1212 23:49:26.171097  165449 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1212 23:49:26.177156  165449 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1212 23:49:26.181395  165449 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1212 23:49:26.219659  165449 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1212 23:49:26.253112  165449 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1212 23:49:26.253159  165449 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1212 23:49:26.253206  165449 ssh_runner.go:195] Run: which crictl
	I1212 23:49:26.292922  165449 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1212 23:49:26.292962  165449 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1212 23:49:26.293016  165449 ssh_runner.go:195] Run: which crictl
	I1212 23:49:26.293503  165449 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1212 23:49:26.293550  165449 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1212 23:49:26.293595  165449 ssh_runner.go:195] Run: which crictl
	I1212 23:49:26.335162  165449 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1212 23:49:26.335216  165449 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1212 23:49:26.335267  165449 ssh_runner.go:195] Run: which crictl
	I1212 23:49:26.342619  165449 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1212 23:49:26.342660  165449 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1212 23:49:26.342708  165449 ssh_runner.go:195] Run: which crictl
	I1212 23:49:26.346974  165449 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1212 23:49:26.347006  165449 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1212 23:49:26.347025  165449 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1212 23:49:26.347028  165449 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1212 23:49:26.347064  165449 ssh_runner.go:195] Run: which crictl
	I1212 23:49:26.347070  165449 ssh_runner.go:195] Run: which crictl
	I1212 23:49:26.347114  165449 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1212 23:49:26.347158  165449 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1212 23:49:26.347224  165449 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1212 23:49:26.347268  165449 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1212 23:49:26.349343  165449 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1212 23:49:26.447706  165449 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1212 23:49:26.447710  165449 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1212 23:49:26.447827  165449 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1212 23:49:26.447843  165449 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1212 23:49:26.451812  165449 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1212 23:49:26.451860  165449 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1212 23:49:26.451898  165449 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1212 23:49:26.471157  165449 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1212 23:49:26.471239  165449 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1212 23:49:26.471253  165449 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1212 23:49:26.475025  165449 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1212 23:49:26.475081  165449 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1212 23:49:26.475093  165449 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1212 23:49:26.475098  165449 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1212 23:49:26.475122  165449 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1212 23:49:26.475181  165449 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1212 23:49:26.543617  165449 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1212 23:49:26.543649  165449 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1212 23:49:26.543721  165449 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1212 23:49:26.543759  165449 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1212 23:49:26.543792  165449 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1212 23:49:26.543850  165449 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1212 23:49:26.908682  165449 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:49:28.372510  165449 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4: (1.89736572s)
	I1212 23:49:28.372538  165449 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1212 23:49:28.372539  165449 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (1.897420586s)
	I1212 23:49:28.372561  165449 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1212 23:49:28.372569  165449 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1212 23:49:28.372627  165449 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1212 23:49:28.372641  165449 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (1.82876765s)
	I1212 23:49:28.372665  165449 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1212 23:49:28.372702  165449 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: (1.828964745s)
	I1212 23:49:28.372726  165449 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1212 23:49:28.372759  165449 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.464047768s)
	I1212 23:49:28.717838  165449 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1212 23:49:28.717884  165449 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1212 23:49:28.717949  165449 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1212 23:49:29.459667  165449 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1212 23:49:29.459716  165449 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.7
	I1212 23:49:29.459762  165449 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1212 23:49:29.605443  165449 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1212 23:49:29.605492  165449 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1212 23:49:29.605556  165449 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1212 23:49:30.350237  165449 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1212 23:49:30.350289  165449 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1212 23:49:30.350346  165449 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1212 23:49:31.196653  165449 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1212 23:49:31.196701  165449 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1212 23:49:31.196742  165449 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1212 23:49:33.243255  165449 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.046488163s)
	I1212 23:49:33.243288  165449 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1212 23:49:33.243318  165449 cache_images.go:123] Successfully loaded all cached images
	I1212 23:49:33.243323  165449 cache_images.go:92] LoadImages completed in 7.224767848s
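The image phase above ends once every cached tarball has been transferred and loaded with podman, after the earlier "crictl images --output json" check concluded that the preloaded images were missing. A small sketch, under assumptions, of how such a check can be expressed by parsing that JSON output; the field names follow the CRI ListImages response shape and are an assumption, not minikube's actual types:

    // Decide whether an expected image is already present in the container
    // runtime by inspecting the output of `sudo crictl images --output json`.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		log.Fatal(err)
    	}
    	want := "registry.k8s.io/kube-apiserver:v1.24.4" // image named in the log above
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			if tag == want {
    				fmt.Println("preloaded")
    				return
    			}
    		}
    	}
    	fmt.Println("not preloaded; images must be transferred and loaded from cache")
    }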
	I1212 23:49:33.243402  165449 ssh_runner.go:195] Run: crio config
	I1212 23:49:33.300373  165449 cni.go:84] Creating CNI manager for ""
	I1212 23:49:33.300403  165449 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:49:33.300452  165449 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:49:33.300480  165449 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.69 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-324893 NodeName:test-preload-324893 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:49:33.300684  165449 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-324893"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:49:33.300789  165449 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-324893 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-324893 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
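One detail worth noticing in the generated configuration above: the KubeletConfiguration's cgroupDriver must agree with the cgroup_manager that was written into the CRI-O config earlier in this log, otherwise pods fail to start. A sketch of that sanity check, assuming the gopkg.in/yaml.v3 module and covering only fields that appear in the config above:

    // Unmarshal the kubelet config fragment and confirm the cgroup driver
    // matches the one CRI-O was configured with ("cgroupfs" in this log).
    package main

    import (
    	"fmt"
    	"log"

    	"gopkg.in/yaml.v3"
    )

    const kubeletCfg = `
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    failSwapOn: false
    staticPodPath: /etc/kubernetes/manifests
    `

    func main() {
    	var cfg struct {
    		CgroupDriver  string `yaml:"cgroupDriver"`
    		FailSwapOn    bool   `yaml:"failSwapOn"`
    		StaticPodPath string `yaml:"staticPodPath"`
    	}
    	if err := yaml.Unmarshal([]byte(kubeletCfg), &cfg); err != nil {
    		log.Fatal(err)
    	}
    	if cfg.CgroupDriver != "cgroupfs" {
    		log.Fatalf("cgroup driver mismatch: kubelet=%q, runtime=cgroupfs", cfg.CgroupDriver)
    	}
    	fmt.Println("kubelet and CRI-O agree on the cgroupfs driver")
    }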
	I1212 23:49:33.300852  165449 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1212 23:49:33.309513  165449 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:49:33.309606  165449 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:49:33.317914  165449 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1212 23:49:33.333770  165449 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:49:33.349626  165449 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1212 23:49:33.366136  165449 ssh_runner.go:195] Run: grep 192.168.39.69	control-plane.minikube.internal$ /etc/hosts
	I1212 23:49:33.370034  165449 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:49:33.382138  165449 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/test-preload-324893 for IP: 192.168.39.69
	I1212 23:49:33.382169  165449 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:49:33.382341  165449 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1212 23:49:33.382391  165449 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1212 23:49:33.382467  165449 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/test-preload-324893/client.key
	I1212 23:49:33.382526  165449 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/test-preload-324893/apiserver.key.0f35f48b
	I1212 23:49:33.382594  165449 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/test-preload-324893/proxy-client.key
	I1212 23:49:33.382706  165449 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1212 23:49:33.382747  165449 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1212 23:49:33.382757  165449 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:49:33.382780  165449 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:49:33.382804  165449 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:49:33.382828  165449 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1212 23:49:33.382867  165449 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1212 23:49:33.383512  165449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/test-preload-324893/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:49:33.406893  165449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/test-preload-324893/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:49:33.430021  165449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/test-preload-324893/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:49:33.452814  165449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/test-preload-324893/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 23:49:33.475514  165449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:49:33.498231  165449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 23:49:33.520879  165449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:49:33.542218  165449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 23:49:33.564460  165449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:49:33.586964  165449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1212 23:49:33.609244  165449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1212 23:49:33.631543  165449 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:49:33.647022  165449 ssh_runner.go:195] Run: openssl version
	I1212 23:49:33.652273  165449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:49:33.661155  165449 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:49:33.665519  165449 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:49:33.665573  165449 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:49:33.670820  165449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:49:33.680897  165449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1212 23:49:33.690043  165449 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1212 23:49:33.694379  165449 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1212 23:49:33.694424  165449 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1212 23:49:33.699643  165449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1212 23:49:33.708853  165449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1212 23:49:33.718425  165449 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1212 23:49:33.722906  165449 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1212 23:49:33.722966  165449 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1212 23:49:33.728313  165449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:49:33.737497  165449 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:49:33.741966  165449 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:49:33.747570  165449 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:49:33.753087  165449 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:49:33.758650  165449 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:49:33.764057  165449 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:49:33.769582  165449 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
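The "openssl x509 -noout -in <cert> -checkend 86400" calls above each ask whether a certificate will still be valid 24 hours from now. A rough Go equivalent for one of those checks, assuming only the certificate path taken from the log:

    // Equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
    // fail if the certificate expires within the next 24 hours.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 24h; it should be regenerated")
    		return
    	}
    	fmt.Println("certificate is valid for at least another 24h")
    }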
	I1212 23:49:33.775224  165449 kubeadm.go:404] StartCluster: {Name:test-preload-324893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVers
ion:v1.24.4 ClusterName:test-preload-324893 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:49:33.775302  165449 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:49:33.775350  165449 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:49:33.810960  165449 cri.go:89] found id: ""
	I1212 23:49:33.811045  165449 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:49:33.819910  165449 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:49:33.819930  165449 kubeadm.go:636] restartCluster start
	I1212 23:49:33.819985  165449 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:49:33.828081  165449 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:33.828538  165449 kubeconfig.go:135] verify returned: extract IP: "test-preload-324893" does not appear in /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:49:33.828672  165449 kubeconfig.go:146] "test-preload-324893" context is missing from /home/jenkins/minikube-integration/17777-136241/kubeconfig - will repair!
	I1212 23:49:33.828995  165449 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:49:33.829559  165449 kapi.go:59] client config for test-preload-324893: &rest.Config{Host:"https://192.168.39.69:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/test-preload-324893/client.crt", KeyFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/test-preload-324893/client.key", CAFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:49:33.830403  165449 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:49:33.838238  165449 api_server.go:166] Checking apiserver status ...
	I1212 23:49:33.838281  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:49:33.848450  165449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:33.848463  165449 api_server.go:166] Checking apiserver status ...
	I1212 23:49:33.848508  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:49:33.858028  165449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:34.358808  165449 api_server.go:166] Checking apiserver status ...
	I1212 23:49:34.358905  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:49:34.370029  165449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:34.858694  165449 api_server.go:166] Checking apiserver status ...
	I1212 23:49:34.858784  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:49:34.869963  165449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:35.359060  165449 api_server.go:166] Checking apiserver status ...
	I1212 23:49:35.359134  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:49:35.371317  165449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:35.858167  165449 api_server.go:166] Checking apiserver status ...
	I1212 23:49:35.858283  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:49:35.870454  165449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:36.358319  165449 api_server.go:166] Checking apiserver status ...
	I1212 23:49:36.358400  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:49:36.372170  165449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:36.858781  165449 api_server.go:166] Checking apiserver status ...
	I1212 23:49:36.858888  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:49:36.870952  165449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:37.358471  165449 api_server.go:166] Checking apiserver status ...
	I1212 23:49:37.358569  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:49:37.371240  165449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:37.858868  165449 api_server.go:166] Checking apiserver status ...
	I1212 23:49:37.858936  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:49:37.872460  165449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:38.359000  165449 api_server.go:166] Checking apiserver status ...
	I1212 23:49:38.359073  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:49:38.371524  165449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:38.859139  165449 api_server.go:166] Checking apiserver status ...
	I1212 23:49:38.859226  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:49:38.872994  165449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:39.358612  165449 api_server.go:166] Checking apiserver status ...
	I1212 23:49:39.358688  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:49:39.371223  165449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:39.858900  165449 api_server.go:166] Checking apiserver status ...
	I1212 23:49:39.859007  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:49:39.871183  165449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:40.358223  165449 api_server.go:166] Checking apiserver status ...
	I1212 23:49:40.358289  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:49:40.370505  165449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:40.858104  165449 api_server.go:166] Checking apiserver status ...
	I1212 23:49:40.858190  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:49:40.870206  165449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:41.358804  165449 api_server.go:166] Checking apiserver status ...
	I1212 23:49:41.358875  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:49:41.371025  165449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:41.858548  165449 api_server.go:166] Checking apiserver status ...
	I1212 23:49:41.858651  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:49:41.870718  165449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:42.358305  165449 api_server.go:166] Checking apiserver status ...
	I1212 23:49:42.358389  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:49:42.370583  165449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:42.858136  165449 api_server.go:166] Checking apiserver status ...
	I1212 23:49:42.858248  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:49:42.871316  165449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:43.358916  165449 api_server.go:166] Checking apiserver status ...
	I1212 23:49:43.358992  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:49:43.370655  165449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:49:43.838287  165449 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 23:49:43.838330  165449 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:49:43.838342  165449 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:49:43.838396  165449 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:49:43.875601  165449 cri.go:89] found id: ""
	I1212 23:49:43.875677  165449 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:49:43.890753  165449 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:49:43.899269  165449 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:49:43.899317  165449 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:49:43.907731  165449 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:49:43.907758  165449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:49:44.030375  165449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:49:44.675891  165449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:49:45.043493  165449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:49:45.120077  165449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:49:45.207610  165449 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:49:45.207704  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:49:45.256676  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:49:45.784179  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:49:46.284335  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:49:46.784161  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:49:47.284226  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:49:47.309152  165449 api_server.go:72] duration metric: took 2.101537213s to wait for apiserver process to appear ...
	I1212 23:49:47.309196  165449 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:49:47.309217  165449 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I1212 23:49:47.309809  165449 api_server.go:269] stopped: https://192.168.39.69:8443/healthz: Get "https://192.168.39.69:8443/healthz": dial tcp 192.168.39.69:8443: connect: connection refused
	I1212 23:49:47.309843  165449 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I1212 23:49:47.310198  165449 api_server.go:269] stopped: https://192.168.39.69:8443/healthz: Get "https://192.168.39.69:8443/healthz": dial tcp 192.168.39.69:8443: connect: connection refused
	I1212 23:49:47.810915  165449 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I1212 23:49:52.313368  165449 api_server.go:279] https://192.168.39.69:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:49:52.313403  165449 api_server.go:103] status: https://192.168.39.69:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:49:52.313418  165449 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I1212 23:49:52.438726  165449 api_server.go:279] https://192.168.39.69:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 23:49:52.438759  165449 api_server.go:103] status: https://192.168.39.69:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 23:49:52.811260  165449 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I1212 23:49:52.816754  165449 api_server.go:279] https://192.168.39.69:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 23:49:52.816780  165449 api_server.go:103] status: https://192.168.39.69:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 23:49:53.310357  165449 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I1212 23:49:53.316707  165449 api_server.go:279] https://192.168.39.69:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 23:49:53.316735  165449 api_server.go:103] status: https://192.168.39.69:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 23:49:53.811341  165449 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I1212 23:49:53.819077  165449 api_server.go:279] https://192.168.39.69:8443/healthz returned 200:
	ok
	I1212 23:49:53.828795  165449 api_server.go:141] control plane version: v1.24.4
	I1212 23:49:53.828823  165449 api_server.go:131] duration metric: took 6.519620245s to wait for apiserver health ...
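	For reference, the per-check breakdown printed above is what the apiserver returns in the body of a failed /healthz probe; failing hooks are only listed as "reason withheld". A minimal way to re-query the same endpoint once the wait succeeds, assuming the kubeconfig context created for this profile, is:

	# list every healthz check by name, even when the aggregate result is healthy (context name assumed)
	kubectl --context test-preload-324893 get --raw '/healthz?verbose'

	The withheld failure reasons themselves are typically easier to find in the kube-apiserver container logs than in the HTTP response.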
	I1212 23:49:53.828832  165449 cni.go:84] Creating CNI manager for ""
	I1212 23:49:53.828838  165449 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:49:53.830738  165449 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:49:53.832136  165449 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:49:53.865504  165449 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
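	The two commands above are the whole bridge CNI step: create /etc/cni/net.d and copy a single 1-k8s.conflist into it. When pod networking needs debugging, the generated file can be read back from the node; a sketch, assuming the profile name used in this run:

	# print the bridge CNI config minikube copied onto the node (profile name taken from this log)
	minikube -p test-preload-324893 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist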
	I1212 23:49:53.885294  165449 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:49:53.894419  165449 system_pods.go:59] 7 kube-system pods found
	I1212 23:49:53.894465  165449 system_pods.go:61] "coredns-6d4b75cb6d-kwcm4" [0cbe91ce-99d8-472a-81b0-b47cf434b399] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 23:49:53.894475  165449 system_pods.go:61] "etcd-test-preload-324893" [64e80021-f7f7-498a-9f1e-060e3db59f57] Running
	I1212 23:49:53.894484  165449 system_pods.go:61] "kube-apiserver-test-preload-324893" [f3d7e701-6761-4a82-85bf-eaec95155226] Running
	I1212 23:49:53.894493  165449 system_pods.go:61] "kube-controller-manager-test-preload-324893" [83d9c653-91e4-4fc2-a1f6-72bf78b29a39] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 23:49:53.894506  165449 system_pods.go:61] "kube-proxy-tm5bf" [3a5f4273-ba3f-4cd5-a836-7748d629f49d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 23:49:53.894515  165449 system_pods.go:61] "kube-scheduler-test-preload-324893" [302072e1-1966-465b-8a90-30e423c41fac] Running
	I1212 23:49:53.894534  165449 system_pods.go:61] "storage-provisioner" [a821573f-0652-4889-94b0-d64aa606975a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 23:49:53.894545  165449 system_pods.go:74] duration metric: took 9.231002ms to wait for pod list to return data ...
	I1212 23:49:53.894562  165449 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:49:53.901655  165449 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:49:53.901685  165449 node_conditions.go:123] node cpu capacity is 2
	I1212 23:49:53.901699  165449 node_conditions.go:105] duration metric: took 7.130034ms to run NodePressure ...
	I1212 23:49:53.901723  165449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:49:54.141838  165449 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:49:54.145734  165449 kubeadm.go:787] kubelet initialised
	I1212 23:49:54.145758  165449 kubeadm.go:788] duration metric: took 3.881194ms waiting for restarted kubelet to initialise ...
	I1212 23:49:54.145765  165449 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:49:54.154205  165449 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-kwcm4" in "kube-system" namespace to be "Ready" ...
	I1212 23:49:54.161284  165449 pod_ready.go:97] node "test-preload-324893" hosting pod "coredns-6d4b75cb6d-kwcm4" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-324893" has status "Ready":"False"
	I1212 23:49:54.161305  165449 pod_ready.go:81] duration metric: took 7.080569ms waiting for pod "coredns-6d4b75cb6d-kwcm4" in "kube-system" namespace to be "Ready" ...
	E1212 23:49:54.161313  165449 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-324893" hosting pod "coredns-6d4b75cb6d-kwcm4" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-324893" has status "Ready":"False"
	I1212 23:49:54.161319  165449 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-324893" in "kube-system" namespace to be "Ready" ...
	I1212 23:49:54.167732  165449 pod_ready.go:97] node "test-preload-324893" hosting pod "etcd-test-preload-324893" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-324893" has status "Ready":"False"
	I1212 23:49:54.167749  165449 pod_ready.go:81] duration metric: took 6.420208ms waiting for pod "etcd-test-preload-324893" in "kube-system" namespace to be "Ready" ...
	E1212 23:49:54.167757  165449 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-324893" hosting pod "etcd-test-preload-324893" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-324893" has status "Ready":"False"
	I1212 23:49:54.167762  165449 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-324893" in "kube-system" namespace to be "Ready" ...
	I1212 23:49:54.178762  165449 pod_ready.go:97] node "test-preload-324893" hosting pod "kube-apiserver-test-preload-324893" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-324893" has status "Ready":"False"
	I1212 23:49:54.178782  165449 pod_ready.go:81] duration metric: took 11.012684ms waiting for pod "kube-apiserver-test-preload-324893" in "kube-system" namespace to be "Ready" ...
	E1212 23:49:54.178790  165449 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-324893" hosting pod "kube-apiserver-test-preload-324893" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-324893" has status "Ready":"False"
	I1212 23:49:54.178798  165449 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-324893" in "kube-system" namespace to be "Ready" ...
	I1212 23:49:54.289790  165449 pod_ready.go:97] node "test-preload-324893" hosting pod "kube-controller-manager-test-preload-324893" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-324893" has status "Ready":"False"
	I1212 23:49:54.289819  165449 pod_ready.go:81] duration metric: took 111.014594ms waiting for pod "kube-controller-manager-test-preload-324893" in "kube-system" namespace to be "Ready" ...
	E1212 23:49:54.289829  165449 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-324893" hosting pod "kube-controller-manager-test-preload-324893" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-324893" has status "Ready":"False"
	I1212 23:49:54.289835  165449 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tm5bf" in "kube-system" namespace to be "Ready" ...
	I1212 23:49:54.688510  165449 pod_ready.go:97] node "test-preload-324893" hosting pod "kube-proxy-tm5bf" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-324893" has status "Ready":"False"
	I1212 23:49:54.688542  165449 pod_ready.go:81] duration metric: took 398.69731ms waiting for pod "kube-proxy-tm5bf" in "kube-system" namespace to be "Ready" ...
	E1212 23:49:54.688555  165449 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-324893" hosting pod "kube-proxy-tm5bf" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-324893" has status "Ready":"False"
	I1212 23:49:54.688564  165449 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-324893" in "kube-system" namespace to be "Ready" ...
	I1212 23:49:55.090383  165449 pod_ready.go:97] node "test-preload-324893" hosting pod "kube-scheduler-test-preload-324893" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-324893" has status "Ready":"False"
	I1212 23:49:55.090406  165449 pod_ready.go:81] duration metric: took 401.835257ms waiting for pod "kube-scheduler-test-preload-324893" in "kube-system" namespace to be "Ready" ...
	E1212 23:49:55.090416  165449 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-324893" hosting pod "kube-scheduler-test-preload-324893" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-324893" has status "Ready":"False"
	I1212 23:49:55.090424  165449 pod_ready.go:38] duration metric: took 944.652152ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:49:55.090441  165449 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:49:55.101376  165449 ops.go:34] apiserver oom_adj: -16
	I1212 23:49:55.101402  165449 kubeadm.go:640] restartCluster took 21.281465185s
	I1212 23:49:55.101411  165449 kubeadm.go:406] StartCluster complete in 21.326190804s
	I1212 23:49:55.101437  165449 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:49:55.101520  165449 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:49:55.102260  165449 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:49:55.102476  165449 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:49:55.102620  165449 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:49:55.102717  165449 addons.go:69] Setting storage-provisioner=true in profile "test-preload-324893"
	I1212 23:49:55.102736  165449 addons.go:69] Setting default-storageclass=true in profile "test-preload-324893"
	I1212 23:49:55.102754  165449 addons.go:231] Setting addon storage-provisioner=true in "test-preload-324893"
	W1212 23:49:55.102763  165449 addons.go:240] addon storage-provisioner should already be in state true
	I1212 23:49:55.102765  165449 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-324893"
	I1212 23:49:55.102767  165449 config.go:182] Loaded profile config "test-preload-324893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1212 23:49:55.102820  165449 host.go:66] Checking if "test-preload-324893" exists ...
	I1212 23:49:55.103115  165449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:49:55.103123  165449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:49:55.103074  165449 kapi.go:59] client config for test-preload-324893: &rest.Config{Host:"https://192.168.39.69:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/test-preload-324893/client.crt", KeyFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/test-preload-324893/client.key", CAFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:49:55.103150  165449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:49:55.103225  165449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:49:55.106876  165449 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-324893" context rescaled to 1 replicas
	I1212 23:49:55.106915  165449 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:49:55.108899  165449 out.go:177] * Verifying Kubernetes components...
	I1212 23:49:55.110388  165449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:49:55.118690  165449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36061
	I1212 23:49:55.119134  165449 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:49:55.119641  165449 main.go:141] libmachine: Using API Version  1
	I1212 23:49:55.119667  165449 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:49:55.120070  165449 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:49:55.120273  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetState
	I1212 23:49:55.120761  165449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44785
	I1212 23:49:55.121163  165449 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:49:55.121643  165449 main.go:141] libmachine: Using API Version  1
	I1212 23:49:55.121672  165449 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:49:55.122024  165449 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:49:55.122558  165449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:49:55.122609  165449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:49:55.122814  165449 kapi.go:59] client config for test-preload-324893: &rest.Config{Host:"https://192.168.39.69:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/test-preload-324893/client.crt", KeyFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/profiles/test-preload-324893/client.key", CAFile:"/home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 23:49:55.123164  165449 addons.go:231] Setting addon default-storageclass=true in "test-preload-324893"
	W1212 23:49:55.123184  165449 addons.go:240] addon default-storageclass should already be in state true
	I1212 23:49:55.123211  165449 host.go:66] Checking if "test-preload-324893" exists ...
	I1212 23:49:55.123614  165449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:49:55.123654  165449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:49:55.136969  165449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40251
	I1212 23:49:55.137354  165449 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:49:55.137754  165449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42205
	I1212 23:49:55.137845  165449 main.go:141] libmachine: Using API Version  1
	I1212 23:49:55.137871  165449 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:49:55.138156  165449 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:49:55.138214  165449 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:49:55.138410  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetState
	I1212 23:49:55.138600  165449 main.go:141] libmachine: Using API Version  1
	I1212 23:49:55.138626  165449 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:49:55.138949  165449 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:49:55.139504  165449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:49:55.139551  165449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:49:55.140079  165449 main.go:141] libmachine: (test-preload-324893) Calling .DriverName
	I1212 23:49:55.142007  165449 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:49:55.143345  165449 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:49:55.143360  165449 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:49:55.143379  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHHostname
	I1212 23:49:55.146676  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:55.147157  165449 main.go:141] libmachine: (test-preload-324893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:0d:fb", ip: ""} in network mk-test-preload-324893: {Iface:virbr1 ExpiryTime:2023-12-13 00:49:12 +0000 UTC Type:0 Mac:52:54:00:2c:0d:fb Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:test-preload-324893 Clientid:01:52:54:00:2c:0d:fb}
	I1212 23:49:55.147202  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined IP address 192.168.39.69 and MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:55.147335  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHPort
	I1212 23:49:55.147549  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHKeyPath
	I1212 23:49:55.147733  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHUsername
	I1212 23:49:55.147904  165449 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/test-preload-324893/id_rsa Username:docker}
	I1212 23:49:55.155969  165449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37233
	I1212 23:49:55.156355  165449 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:49:55.156946  165449 main.go:141] libmachine: Using API Version  1
	I1212 23:49:55.156991  165449 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:49:55.157376  165449 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:49:55.157570  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetState
	I1212 23:49:55.159310  165449 main.go:141] libmachine: (test-preload-324893) Calling .DriverName
	I1212 23:49:55.159587  165449 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:49:55.159609  165449 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:49:55.159626  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHHostname
	I1212 23:49:55.162717  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:55.163199  165449 main.go:141] libmachine: (test-preload-324893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:0d:fb", ip: ""} in network mk-test-preload-324893: {Iface:virbr1 ExpiryTime:2023-12-13 00:49:12 +0000 UTC Type:0 Mac:52:54:00:2c:0d:fb Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:test-preload-324893 Clientid:01:52:54:00:2c:0d:fb}
	I1212 23:49:55.163230  165449 main.go:141] libmachine: (test-preload-324893) DBG | domain test-preload-324893 has defined IP address 192.168.39.69 and MAC address 52:54:00:2c:0d:fb in network mk-test-preload-324893
	I1212 23:49:55.163375  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHPort
	I1212 23:49:55.163543  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHKeyPath
	I1212 23:49:55.163710  165449 main.go:141] libmachine: (test-preload-324893) Calling .GetSSHUsername
	I1212 23:49:55.163886  165449 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/test-preload-324893/id_rsa Username:docker}
	I1212 23:49:55.328243  165449 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 23:49:55.328313  165449 node_ready.go:35] waiting up to 6m0s for node "test-preload-324893" to be "Ready" ...
	I1212 23:49:55.351277  165449 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:49:55.352996  165449 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:49:56.276540  165449 main.go:141] libmachine: Making call to close driver server
	I1212 23:49:56.276565  165449 main.go:141] libmachine: (test-preload-324893) Calling .Close
	I1212 23:49:56.276599  165449 main.go:141] libmachine: Making call to close driver server
	I1212 23:49:56.276622  165449 main.go:141] libmachine: (test-preload-324893) Calling .Close
	I1212 23:49:56.276891  165449 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:49:56.276907  165449 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:49:56.276916  165449 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:49:56.276921  165449 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:49:56.276925  165449 main.go:141] libmachine: Making call to close driver server
	I1212 23:49:56.276930  165449 main.go:141] libmachine: Making call to close driver server
	I1212 23:49:56.276950  165449 main.go:141] libmachine: (test-preload-324893) Calling .Close
	I1212 23:49:56.276929  165449 main.go:141] libmachine: (test-preload-324893) DBG | Closing plugin on server side
	I1212 23:49:56.276933  165449 main.go:141] libmachine: (test-preload-324893) Calling .Close
	I1212 23:49:56.277141  165449 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:49:56.277153  165449 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:49:56.277165  165449 main.go:141] libmachine: (test-preload-324893) DBG | Closing plugin on server side
	I1212 23:49:56.277202  165449 main.go:141] libmachine: (test-preload-324893) DBG | Closing plugin on server side
	I1212 23:49:56.277247  165449 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:49:56.277258  165449 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:49:56.286135  165449 main.go:141] libmachine: Making call to close driver server
	I1212 23:49:56.286155  165449 main.go:141] libmachine: (test-preload-324893) Calling .Close
	I1212 23:49:56.286423  165449 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:49:56.286438  165449 main.go:141] libmachine: (test-preload-324893) DBG | Closing plugin on server side
	I1212 23:49:56.286442  165449 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:49:56.288247  165449 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 23:49:56.289558  165449 addons.go:502] enable addons completed in 1.186946523s: enabled=[storage-provisioner default-storageclass]
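	A quick cross-check that the two re-enabled addons actually converged, sketched with the same profile and an assumed kubeconfig context of the same name:

	# addon status as minikube sees it
	minikube -p test-preload-324893 addons list
	# the objects the two addons create
	kubectl --context test-preload-324893 -n kube-system get pod storage-provisioner
	kubectl --context test-preload-324893 get storageclass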
	I1212 23:49:57.494820  165449 node_ready.go:58] node "test-preload-324893" has status "Ready":"False"
	I1212 23:49:59.993560  165449 node_ready.go:58] node "test-preload-324893" has status "Ready":"False"
	I1212 23:50:02.494610  165449 node_ready.go:58] node "test-preload-324893" has status "Ready":"False"
	I1212 23:50:02.993516  165449 node_ready.go:49] node "test-preload-324893" has status "Ready":"True"
	I1212 23:50:02.993541  165449 node_ready.go:38] duration metric: took 7.665202241s waiting for node "test-preload-324893" to be "Ready" ...
	I1212 23:50:02.993549  165449 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:50:02.998463  165449 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-kwcm4" in "kube-system" namespace to be "Ready" ...
	I1212 23:50:03.003573  165449 pod_ready.go:92] pod "coredns-6d4b75cb6d-kwcm4" in "kube-system" namespace has status "Ready":"True"
	I1212 23:50:03.003590  165449 pod_ready.go:81] duration metric: took 5.107492ms waiting for pod "coredns-6d4b75cb6d-kwcm4" in "kube-system" namespace to be "Ready" ...
	I1212 23:50:03.003598  165449 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-324893" in "kube-system" namespace to be "Ready" ...
	I1212 23:50:03.007819  165449 pod_ready.go:92] pod "etcd-test-preload-324893" in "kube-system" namespace has status "Ready":"True"
	I1212 23:50:03.007835  165449 pod_ready.go:81] duration metric: took 4.231215ms waiting for pod "etcd-test-preload-324893" in "kube-system" namespace to be "Ready" ...
	I1212 23:50:03.007843  165449 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-324893" in "kube-system" namespace to be "Ready" ...
	I1212 23:50:04.525340  165449 pod_ready.go:92] pod "kube-apiserver-test-preload-324893" in "kube-system" namespace has status "Ready":"True"
	I1212 23:50:04.525389  165449 pod_ready.go:81] duration metric: took 1.517534404s waiting for pod "kube-apiserver-test-preload-324893" in "kube-system" namespace to be "Ready" ...
	I1212 23:50:04.525406  165449 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-324893" in "kube-system" namespace to be "Ready" ...
	I1212 23:50:04.593448  165449 pod_ready.go:92] pod "kube-controller-manager-test-preload-324893" in "kube-system" namespace has status "Ready":"True"
	I1212 23:50:04.593483  165449 pod_ready.go:81] duration metric: took 68.065983ms waiting for pod "kube-controller-manager-test-preload-324893" in "kube-system" namespace to be "Ready" ...
	I1212 23:50:04.593498  165449 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tm5bf" in "kube-system" namespace to be "Ready" ...
	I1212 23:50:04.994974  165449 pod_ready.go:92] pod "kube-proxy-tm5bf" in "kube-system" namespace has status "Ready":"True"
	I1212 23:50:04.994995  165449 pod_ready.go:81] duration metric: took 401.488684ms waiting for pod "kube-proxy-tm5bf" in "kube-system" namespace to be "Ready" ...
	I1212 23:50:04.995005  165449 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-324893" in "kube-system" namespace to be "Ready" ...
	I1212 23:50:05.394060  165449 pod_ready.go:92] pod "kube-scheduler-test-preload-324893" in "kube-system" namespace has status "Ready":"True"
	I1212 23:50:05.394089  165449 pod_ready.go:81] duration metric: took 399.076373ms waiting for pod "kube-scheduler-test-preload-324893" in "kube-system" namespace to be "Ready" ...
	I1212 23:50:05.394103  165449 pod_ready.go:38] duration metric: took 2.400544775s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:50:05.394122  165449 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:50:05.394185  165449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:50:05.407978  165449 api_server.go:72] duration metric: took 10.301023665s to wait for apiserver process to appear ...
	I1212 23:50:05.407999  165449 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:50:05.408015  165449 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I1212 23:50:05.414201  165449 api_server.go:279] https://192.168.39.69:8443/healthz returned 200:
	ok
	I1212 23:50:05.415142  165449 api_server.go:141] control plane version: v1.24.4
	I1212 23:50:05.415170  165449 api_server.go:131] duration metric: took 7.164206ms to wait for apiserver health ...
	I1212 23:50:05.415179  165449 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:50:05.596531  165449 system_pods.go:59] 7 kube-system pods found
	I1212 23:50:05.596566  165449 system_pods.go:61] "coredns-6d4b75cb6d-kwcm4" [0cbe91ce-99d8-472a-81b0-b47cf434b399] Running
	I1212 23:50:05.596573  165449 system_pods.go:61] "etcd-test-preload-324893" [64e80021-f7f7-498a-9f1e-060e3db59f57] Running
	I1212 23:50:05.596579  165449 system_pods.go:61] "kube-apiserver-test-preload-324893" [f3d7e701-6761-4a82-85bf-eaec95155226] Running
	I1212 23:50:05.596585  165449 system_pods.go:61] "kube-controller-manager-test-preload-324893" [83d9c653-91e4-4fc2-a1f6-72bf78b29a39] Running
	I1212 23:50:05.596590  165449 system_pods.go:61] "kube-proxy-tm5bf" [3a5f4273-ba3f-4cd5-a836-7748d629f49d] Running
	I1212 23:50:05.596597  165449 system_pods.go:61] "kube-scheduler-test-preload-324893" [302072e1-1966-465b-8a90-30e423c41fac] Running
	I1212 23:50:05.596602  165449 system_pods.go:61] "storage-provisioner" [a821573f-0652-4889-94b0-d64aa606975a] Running
	I1212 23:50:05.596611  165449 system_pods.go:74] duration metric: took 181.423844ms to wait for pod list to return data ...
	I1212 23:50:05.596621  165449 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:50:05.793835  165449 default_sa.go:45] found service account: "default"
	I1212 23:50:05.793862  165449 default_sa.go:55] duration metric: took 197.228936ms for default service account to be created ...
	I1212 23:50:05.793871  165449 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:50:05.996169  165449 system_pods.go:86] 7 kube-system pods found
	I1212 23:50:05.996197  165449 system_pods.go:89] "coredns-6d4b75cb6d-kwcm4" [0cbe91ce-99d8-472a-81b0-b47cf434b399] Running
	I1212 23:50:05.996202  165449 system_pods.go:89] "etcd-test-preload-324893" [64e80021-f7f7-498a-9f1e-060e3db59f57] Running
	I1212 23:50:05.996206  165449 system_pods.go:89] "kube-apiserver-test-preload-324893" [f3d7e701-6761-4a82-85bf-eaec95155226] Running
	I1212 23:50:05.996210  165449 system_pods.go:89] "kube-controller-manager-test-preload-324893" [83d9c653-91e4-4fc2-a1f6-72bf78b29a39] Running
	I1212 23:50:05.996214  165449 system_pods.go:89] "kube-proxy-tm5bf" [3a5f4273-ba3f-4cd5-a836-7748d629f49d] Running
	I1212 23:50:05.996218  165449 system_pods.go:89] "kube-scheduler-test-preload-324893" [302072e1-1966-465b-8a90-30e423c41fac] Running
	I1212 23:50:05.996221  165449 system_pods.go:89] "storage-provisioner" [a821573f-0652-4889-94b0-d64aa606975a] Running
	I1212 23:50:05.996228  165449 system_pods.go:126] duration metric: took 202.351411ms to wait for k8s-apps to be running ...
	I1212 23:50:05.996234  165449 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:50:05.996276  165449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:50:06.010472  165449 system_svc.go:56] duration metric: took 14.226379ms WaitForService to wait for kubelet.
	I1212 23:50:06.010508  165449 kubeadm.go:581] duration metric: took 10.903557821s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:50:06.010531  165449 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:50:06.194606  165449 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:50:06.194633  165449 node_conditions.go:123] node cpu capacity is 2
	I1212 23:50:06.194642  165449 node_conditions.go:105] duration metric: took 184.105597ms to run NodePressure ...
	I1212 23:50:06.194653  165449 start.go:228] waiting for startup goroutines ...
	I1212 23:50:06.194659  165449 start.go:233] waiting for cluster config update ...
	I1212 23:50:06.194667  165449 start.go:242] writing updated cluster config ...
	I1212 23:50:06.194931  165449 ssh_runner.go:195] Run: rm -f paused
	I1212 23:50:06.241651  165449 start.go:600] kubectl: 1.28.4, cluster: 1.24.4 (minor skew: 4)
	I1212 23:50:06.243720  165449 out.go:177] 
	W1212 23:50:06.245380  165449 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.24.4.
	I1212 23:50:06.246829  165449 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1212 23:50:06.248216  165449 out.go:177] * Done! kubectl is now configured to use "test-preload-324893" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
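	What follows is the CRI-O journal slice attached to the report on failure; to regather the same data from a live profile, one option (a sketch, not necessarily the exact command the harness runs) is:

	# full log bundle, including the '==> CRI-O <==' section
	minikube -p test-preload-324893 logs
	# or just the CRI-O unit journal from inside the VM
	minikube -p test-preload-324893 ssh -- sudo journalctl -u crio --no-pager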
	* -- Journal begins at Tue 2023-12-12 23:49:11 UTC, ends at Tue 2023-12-12 23:50:07 UTC. --
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.260424937Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86375848fa9f2460c5fff0685842170b244e49c134f8b6556beee2e166912d09,PodSandboxId:193f85ebb3c9b32ce36e365669cf8f3d0009434afab9a4d1ca781777a452d906,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1702424997689620722,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-kwcm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cbe91ce-99d8-472a-81b0-b47cf434b399,},Annotations:map[string]string{io.kubernetes.container.hash: 2886a785,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0909322b4d188821591813127563657c2a6c058d46aa8c5de4a17ff6ebfa48f,PodSandboxId:39d3d4fc101ac49adb2a8a271e89a7a2b0e0f3b4aa0675bee00de60db2b1b055,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702424994655734146,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: a821573f-0652-4889-94b0-d64aa606975a,},Annotations:map[string]string{io.kubernetes.container.hash: 72680057,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a0b88bb4718ab1f0f65f88700e46ce0f7c4009462e7a2b4e3c64f2454f323f,PodSandboxId:db540cf7d1b3c7f4f8e78955454135dfd23a05208451be3b70377f5904aefdd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1702424994366022568,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tm5bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a5f4273-ba3f-4cd5-a836-7748d629f49d,},Annotations:map[string]string{io.kubernetes.container.hash: a560ba5a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6d135232b4e19f1fcbef5a0ff93852986b03af3c1cc52d180597fa96e461626,PodSandboxId:2f03dbd492195e6a4d3497357dd7471dc70dc32bf4afe8d6f908682233bf5e51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1702424986821791073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7d060f5d9
b80c854427c443d2ad7c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57002effe57d0003b6f2adb345201ee154eed9a283e9f024734962d24edd15e0,PodSandboxId:a79c8abd90674fb6160d65d8a03f6cedf2cb37e3cbedc43b4411595e06f5474a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1702424986645615975,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ef43cf97f948c4226b82088893f455cb,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b5a61944007f77c233a094fa1fd11357a4c7d851ac14cf4e9851d25e5d204c,PodSandboxId:032b3c9be9ce465f8a11752b7983a8690d06ea8a2a19ec858fab823164935caf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1702424986492303603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cffd9fbe2bf007be45b1a10b72dbb796,},Annotat
ions:map[string]string{io.kubernetes.container.hash: c3f757d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ecb05946c3fc28944e011f558f4b19415908a05b93f68ff0bc232388338add,PodSandboxId:0c2bcff2fe338999f302ed8e7c97e93d8d137ab2922e8310e94f558985ec0106,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1702424986319496604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4da0050ad49a4f24777c2cb2f8794851,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 837940f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aac56467-8c45-4c01-b412-358b307cc874 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.282445364Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8d1a4bdf-26e9-4434-b12a-df92fd1d267f name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.282790336Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:193f85ebb3c9b32ce36e365669cf8f3d0009434afab9a4d1ca781777a452d906,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-kwcm4,Uid:0cbe91ce-99d8-472a-81b0-b47cf434b399,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702424997123782963,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-kwcm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cbe91ce-99d8-472a-81b0-b47cf434b399,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T23:49:53.177558551Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:39d3d4fc101ac49adb2a8a271e89a7a2b0e0f3b4aa0675bee00de60db2b1b055,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a821573f-0652-4889-94b0-d64aa606975a,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702424994108922483,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a821573f-0652-4889-94b0-d64aa606975a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-12T23:49:53.177557375Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:db540cf7d1b3c7f4f8e78955454135dfd23a05208451be3b70377f5904aefdd3,Metadata:&PodSandboxMetadata{Name:kube-proxy-tm5bf,Uid:3a5f4273-ba3f-4cd5-a836-7748d629f49d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702424993803913610,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tm5bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5f4273-ba3f-4cd5-a836-7748d629f49d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T23:49:53.177555107Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a79c8abd90674fb6160d65d8a03f6cedf2cb37e3cbedc43b4411595e06f5474a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-324893,Ui
d:ef43cf97f948c4226b82088893f455cb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702424985785067936,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef43cf97f948c4226b82088893f455cb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ef43cf97f948c4226b82088893f455cb,kubernetes.io/config.seen: 2023-12-12T23:49:45.158906993Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2f03dbd492195e6a4d3497357dd7471dc70dc32bf4afe8d6f908682233bf5e51,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-324893,Uid:f7d060f5d9b80c854427c443d2ad7c8c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702424985766931189,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-324893,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7d060f5d9b80c854427c443d2ad7c8c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f7d060f5d9b80c854427c443d2ad7c8c,kubernetes.io/config.seen: 2023-12-12T23:49:45.158908178Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:032b3c9be9ce465f8a11752b7983a8690d06ea8a2a19ec858fab823164935caf,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-324893,Uid:cffd9fbe2bf007be45b1a10b72dbb796,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702424985757077884,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cffd9fbe2bf007be45b1a10b72dbb796,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.69:2379,kubernetes.io/config.hash: cffd9fbe2bf007be45b1a10b72dbb796,kubernetes.io/config.seen: 2023-12-12T23:
49:45.171425327Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0c2bcff2fe338999f302ed8e7c97e93d8d137ab2922e8310e94f558985ec0106,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-324893,Uid:4da0050ad49a4f24777c2cb2f8794851,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702424985719906638,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4da0050ad49a4f24777c2cb2f8794851,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.69:8443,kubernetes.io/config.hash: 4da0050ad49a4f24777c2cb2f8794851,kubernetes.io/config.seen: 2023-12-12T23:49:45.158891762Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=8d1a4bdf-26e9-4434-b12a-df92fd1d267f name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.283555677Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4ee96710-5055-4418-a0e3-e4295cd43b29 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.283675041Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4ee96710-5055-4418-a0e3-e4295cd43b29 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.283831817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86375848fa9f2460c5fff0685842170b244e49c134f8b6556beee2e166912d09,PodSandboxId:193f85ebb3c9b32ce36e365669cf8f3d0009434afab9a4d1ca781777a452d906,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1702424997689620722,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-kwcm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cbe91ce-99d8-472a-81b0-b47cf434b399,},Annotations:map[string]string{io.kubernetes.container.hash: 2886a785,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0909322b4d188821591813127563657c2a6c058d46aa8c5de4a17ff6ebfa48f,PodSandboxId:39d3d4fc101ac49adb2a8a271e89a7a2b0e0f3b4aa0675bee00de60db2b1b055,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702424994655734146,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: a821573f-0652-4889-94b0-d64aa606975a,},Annotations:map[string]string{io.kubernetes.container.hash: 72680057,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a0b88bb4718ab1f0f65f88700e46ce0f7c4009462e7a2b4e3c64f2454f323f,PodSandboxId:db540cf7d1b3c7f4f8e78955454135dfd23a05208451be3b70377f5904aefdd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1702424994366022568,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tm5bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a5f4273-ba3f-4cd5-a836-7748d629f49d,},Annotations:map[string]string{io.kubernetes.container.hash: a560ba5a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6d135232b4e19f1fcbef5a0ff93852986b03af3c1cc52d180597fa96e461626,PodSandboxId:2f03dbd492195e6a4d3497357dd7471dc70dc32bf4afe8d6f908682233bf5e51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1702424986821791073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7d060f5d9
b80c854427c443d2ad7c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57002effe57d0003b6f2adb345201ee154eed9a283e9f024734962d24edd15e0,PodSandboxId:a79c8abd90674fb6160d65d8a03f6cedf2cb37e3cbedc43b4411595e06f5474a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1702424986645615975,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ef43cf97f948c4226b82088893f455cb,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b5a61944007f77c233a094fa1fd11357a4c7d851ac14cf4e9851d25e5d204c,PodSandboxId:032b3c9be9ce465f8a11752b7983a8690d06ea8a2a19ec858fab823164935caf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1702424986492303603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cffd9fbe2bf007be45b1a10b72dbb796,},Annotat
ions:map[string]string{io.kubernetes.container.hash: c3f757d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ecb05946c3fc28944e011f558f4b19415908a05b93f68ff0bc232388338add,PodSandboxId:0c2bcff2fe338999f302ed8e7c97e93d8d137ab2922e8310e94f558985ec0106,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1702424986319496604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4da0050ad49a4f24777c2cb2f8794851,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 837940f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4ee96710-5055-4418-a0e3-e4295cd43b29 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.284441391Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=472489e7-f3e1-4009-b7e5-da7eda9bbb0a name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.284596286Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:193f85ebb3c9b32ce36e365669cf8f3d0009434afab9a4d1ca781777a452d906,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-kwcm4,Uid:0cbe91ce-99d8-472a-81b0-b47cf434b399,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702424997123782963,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-kwcm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cbe91ce-99d8-472a-81b0-b47cf434b399,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T23:49:53.177558551Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:39d3d4fc101ac49adb2a8a271e89a7a2b0e0f3b4aa0675bee00de60db2b1b055,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a821573f-0652-4889-94b0-d64aa606975a,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702424994108922483,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a821573f-0652-4889-94b0-d64aa606975a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-12T23:49:53.177557375Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:db540cf7d1b3c7f4f8e78955454135dfd23a05208451be3b70377f5904aefdd3,Metadata:&PodSandboxMetadata{Name:kube-proxy-tm5bf,Uid:3a5f4273-ba3f-4cd5-a836-7748d629f49d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702424993803913610,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tm5bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a5f4273-ba3f-4cd5-a836-7748d629f49d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T23:49:53.177555107Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a79c8abd90674fb6160d65d8a03f6cedf2cb37e3cbedc43b4411595e06f5474a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-324893,Ui
d:ef43cf97f948c4226b82088893f455cb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702424985785067936,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef43cf97f948c4226b82088893f455cb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ef43cf97f948c4226b82088893f455cb,kubernetes.io/config.seen: 2023-12-12T23:49:45.158906993Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2f03dbd492195e6a4d3497357dd7471dc70dc32bf4afe8d6f908682233bf5e51,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-324893,Uid:f7d060f5d9b80c854427c443d2ad7c8c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702424985766931189,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-324893,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7d060f5d9b80c854427c443d2ad7c8c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f7d060f5d9b80c854427c443d2ad7c8c,kubernetes.io/config.seen: 2023-12-12T23:49:45.158908178Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:032b3c9be9ce465f8a11752b7983a8690d06ea8a2a19ec858fab823164935caf,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-324893,Uid:cffd9fbe2bf007be45b1a10b72dbb796,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702424985757077884,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cffd9fbe2bf007be45b1a10b72dbb796,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.69:2379,kubernetes.io/config.hash: cffd9fbe2bf007be45b1a10b72dbb796,kubernetes.io/config.seen: 2023-12-12T23:
49:45.171425327Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0c2bcff2fe338999f302ed8e7c97e93d8d137ab2922e8310e94f558985ec0106,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-324893,Uid:4da0050ad49a4f24777c2cb2f8794851,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702424985719906638,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4da0050ad49a4f24777c2cb2f8794851,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.69:8443,kubernetes.io/config.hash: 4da0050ad49a4f24777c2cb2f8794851,kubernetes.io/config.seen: 2023-12-12T23:49:45.158891762Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=472489e7-f3e1-4009-b7e5-da7eda9bbb0a name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.285203144Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6e0a722d-5aaf-40e0-b596-34471b8c82a9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.285267850Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6e0a722d-5aaf-40e0-b596-34471b8c82a9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.285408048Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86375848fa9f2460c5fff0685842170b244e49c134f8b6556beee2e166912d09,PodSandboxId:193f85ebb3c9b32ce36e365669cf8f3d0009434afab9a4d1ca781777a452d906,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1702424997689620722,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-kwcm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cbe91ce-99d8-472a-81b0-b47cf434b399,},Annotations:map[string]string{io.kubernetes.container.hash: 2886a785,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0909322b4d188821591813127563657c2a6c058d46aa8c5de4a17ff6ebfa48f,PodSandboxId:39d3d4fc101ac49adb2a8a271e89a7a2b0e0f3b4aa0675bee00de60db2b1b055,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702424994655734146,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: a821573f-0652-4889-94b0-d64aa606975a,},Annotations:map[string]string{io.kubernetes.container.hash: 72680057,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a0b88bb4718ab1f0f65f88700e46ce0f7c4009462e7a2b4e3c64f2454f323f,PodSandboxId:db540cf7d1b3c7f4f8e78955454135dfd23a05208451be3b70377f5904aefdd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1702424994366022568,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tm5bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a5f4273-ba3f-4cd5-a836-7748d629f49d,},Annotations:map[string]string{io.kubernetes.container.hash: a560ba5a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6d135232b4e19f1fcbef5a0ff93852986b03af3c1cc52d180597fa96e461626,PodSandboxId:2f03dbd492195e6a4d3497357dd7471dc70dc32bf4afe8d6f908682233bf5e51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1702424986821791073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7d060f5d9
b80c854427c443d2ad7c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57002effe57d0003b6f2adb345201ee154eed9a283e9f024734962d24edd15e0,PodSandboxId:a79c8abd90674fb6160d65d8a03f6cedf2cb37e3cbedc43b4411595e06f5474a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1702424986645615975,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ef43cf97f948c4226b82088893f455cb,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b5a61944007f77c233a094fa1fd11357a4c7d851ac14cf4e9851d25e5d204c,PodSandboxId:032b3c9be9ce465f8a11752b7983a8690d06ea8a2a19ec858fab823164935caf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1702424986492303603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cffd9fbe2bf007be45b1a10b72dbb796,},Annotat
ions:map[string]string{io.kubernetes.container.hash: c3f757d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ecb05946c3fc28944e011f558f4b19415908a05b93f68ff0bc232388338add,PodSandboxId:0c2bcff2fe338999f302ed8e7c97e93d8d137ab2922e8310e94f558985ec0106,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1702424986319496604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4da0050ad49a4f24777c2cb2f8794851,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 837940f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6e0a722d-5aaf-40e0-b596-34471b8c82a9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.298804618Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8bd0db17-4207-45a4-bc5b-67584361d1a6 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.298870807Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8bd0db17-4207-45a4-bc5b-67584361d1a6 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.300061441Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=20947b11-2179-4339-8bf4-6765b7eb0fee name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.300453730Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702425007300443734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=20947b11-2179-4339-8bf4-6765b7eb0fee name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.301114527Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=201c658d-03ef-43e3-baa5-24a58d9d6c59 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.301159465Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=201c658d-03ef-43e3-baa5-24a58d9d6c59 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.301351926Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86375848fa9f2460c5fff0685842170b244e49c134f8b6556beee2e166912d09,PodSandboxId:193f85ebb3c9b32ce36e365669cf8f3d0009434afab9a4d1ca781777a452d906,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1702424997689620722,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-kwcm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cbe91ce-99d8-472a-81b0-b47cf434b399,},Annotations:map[string]string{io.kubernetes.container.hash: 2886a785,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0909322b4d188821591813127563657c2a6c058d46aa8c5de4a17ff6ebfa48f,PodSandboxId:39d3d4fc101ac49adb2a8a271e89a7a2b0e0f3b4aa0675bee00de60db2b1b055,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702424994655734146,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: a821573f-0652-4889-94b0-d64aa606975a,},Annotations:map[string]string{io.kubernetes.container.hash: 72680057,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a0b88bb4718ab1f0f65f88700e46ce0f7c4009462e7a2b4e3c64f2454f323f,PodSandboxId:db540cf7d1b3c7f4f8e78955454135dfd23a05208451be3b70377f5904aefdd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1702424994366022568,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tm5bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a5f4273-ba3f-4cd5-a836-7748d629f49d,},Annotations:map[string]string{io.kubernetes.container.hash: a560ba5a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6d135232b4e19f1fcbef5a0ff93852986b03af3c1cc52d180597fa96e461626,PodSandboxId:2f03dbd492195e6a4d3497357dd7471dc70dc32bf4afe8d6f908682233bf5e51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1702424986821791073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7d060f5d9
b80c854427c443d2ad7c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57002effe57d0003b6f2adb345201ee154eed9a283e9f024734962d24edd15e0,PodSandboxId:a79c8abd90674fb6160d65d8a03f6cedf2cb37e3cbedc43b4411595e06f5474a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1702424986645615975,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ef43cf97f948c4226b82088893f455cb,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b5a61944007f77c233a094fa1fd11357a4c7d851ac14cf4e9851d25e5d204c,PodSandboxId:032b3c9be9ce465f8a11752b7983a8690d06ea8a2a19ec858fab823164935caf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1702424986492303603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cffd9fbe2bf007be45b1a10b72dbb796,},Annotat
ions:map[string]string{io.kubernetes.container.hash: c3f757d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ecb05946c3fc28944e011f558f4b19415908a05b93f68ff0bc232388338add,PodSandboxId:0c2bcff2fe338999f302ed8e7c97e93d8d137ab2922e8310e94f558985ec0106,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1702424986319496604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4da0050ad49a4f24777c2cb2f8794851,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 837940f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=201c658d-03ef-43e3-baa5-24a58d9d6c59 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.336548723Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=466fd834-39a5-46e8-b24b-16f9d284b095 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.336688168Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=466fd834-39a5-46e8-b24b-16f9d284b095 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.338854654Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b8b214f2-a3d5-4353-a9f7-70ef38972de3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.342202383Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702425007342184871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=b8b214f2-a3d5-4353-a9f7-70ef38972de3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.344715048Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ef9cdec5-9c2c-4886-bab3-6a0db1f16975 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.344759439Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ef9cdec5-9c2c-4886-bab3-6a0db1f16975 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:50:07 test-preload-324893 crio[698]: time="2023-12-12 23:50:07.344921798Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86375848fa9f2460c5fff0685842170b244e49c134f8b6556beee2e166912d09,PodSandboxId:193f85ebb3c9b32ce36e365669cf8f3d0009434afab9a4d1ca781777a452d906,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1702424997689620722,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-kwcm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cbe91ce-99d8-472a-81b0-b47cf434b399,},Annotations:map[string]string{io.kubernetes.container.hash: 2886a785,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0909322b4d188821591813127563657c2a6c058d46aa8c5de4a17ff6ebfa48f,PodSandboxId:39d3d4fc101ac49adb2a8a271e89a7a2b0e0f3b4aa0675bee00de60db2b1b055,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702424994655734146,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: a821573f-0652-4889-94b0-d64aa606975a,},Annotations:map[string]string{io.kubernetes.container.hash: 72680057,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a0b88bb4718ab1f0f65f88700e46ce0f7c4009462e7a2b4e3c64f2454f323f,PodSandboxId:db540cf7d1b3c7f4f8e78955454135dfd23a05208451be3b70377f5904aefdd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1702424994366022568,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tm5bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a5f4273-ba3f-4cd5-a836-7748d629f49d,},Annotations:map[string]string{io.kubernetes.container.hash: a560ba5a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6d135232b4e19f1fcbef5a0ff93852986b03af3c1cc52d180597fa96e461626,PodSandboxId:2f03dbd492195e6a4d3497357dd7471dc70dc32bf4afe8d6f908682233bf5e51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1702424986821791073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7d060f5d9
b80c854427c443d2ad7c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57002effe57d0003b6f2adb345201ee154eed9a283e9f024734962d24edd15e0,PodSandboxId:a79c8abd90674fb6160d65d8a03f6cedf2cb37e3cbedc43b4411595e06f5474a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1702424986645615975,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ef43cf97f948c4226b82088893f455cb,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b5a61944007f77c233a094fa1fd11357a4c7d851ac14cf4e9851d25e5d204c,PodSandboxId:032b3c9be9ce465f8a11752b7983a8690d06ea8a2a19ec858fab823164935caf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1702424986492303603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cffd9fbe2bf007be45b1a10b72dbb796,},Annotat
ions:map[string]string{io.kubernetes.container.hash: c3f757d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ecb05946c3fc28944e011f558f4b19415908a05b93f68ff0bc232388338add,PodSandboxId:0c2bcff2fe338999f302ed8e7c97e93d8d137ab2922e8310e94f558985ec0106,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1702424986319496604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-324893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4da0050ad49a4f24777c2cb2f8794851,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 837940f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ef9cdec5-9c2c-4886-bab3-6a0db1f16975 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	86375848fa9f2       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   9 seconds ago       Running             coredns                   1                   193f85ebb3c9b       coredns-6d4b75cb6d-kwcm4
	a0909322b4d18       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       2                   39d3d4fc101ac       storage-provisioner
	d7a0b88bb4718       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago      Running             kube-proxy                1                   db540cf7d1b3c       kube-proxy-tm5bf
	e6d135232b4e1       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   2f03dbd492195       kube-scheduler-test-preload-324893
	57002effe57d0       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   a79c8abd90674       kube-controller-manager-test-preload-324893
	12b5a61944007       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   032b3c9be9ce4       etcd-test-preload-324893
	b8ecb05946c3f       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   0c2bcff2fe338       kube-apiserver-test-preload-324893
	
	* 
	* ==> coredns [86375848fa9f2460c5fff0685842170b244e49c134f8b6556beee2e166912d09] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:45945 - 56959 "HINFO IN 1673537452051669968.5860801291556146646. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010616043s
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-324893
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-324893
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446
	                    minikube.k8s.io/name=test-preload-324893
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T23_47_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:47:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-324893
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:50:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:50:02 +0000   Tue, 12 Dec 2023 23:47:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:50:02 +0000   Tue, 12 Dec 2023 23:47:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:50:02 +0000   Tue, 12 Dec 2023 23:47:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:50:02 +0000   Tue, 12 Dec 2023 23:50:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.69
	  Hostname:    test-preload-324893
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 0eb1d4fa7d6345f98b64b84c258b3d23
	  System UUID:                0eb1d4fa-7d63-45f9-8b64-b84c258b3d23
	  Boot ID:                    1708eef8-41ac-41d0-9aca-487cf2ae5e66
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-kwcm4                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m14s
	  kube-system                 etcd-test-preload-324893                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m27s
	  kube-system                 kube-apiserver-test-preload-324893             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-controller-manager-test-preload-324893    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-tm5bf                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-scheduler-test-preload-324893             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12s                    kube-proxy       
	  Normal  Starting                 2m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m36s (x4 over 2m36s)  kubelet          Node test-preload-324893 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m36s (x4 over 2m36s)  kubelet          Node test-preload-324893 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m36s (x4 over 2m36s)  kubelet          Node test-preload-324893 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m27s                  kubelet          Node test-preload-324893 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m27s                  kubelet          Node test-preload-324893 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m27s                  kubelet          Node test-preload-324893 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m17s                  kubelet          Node test-preload-324893 status is now: NodeReady
	  Normal  RegisteredNode           2m15s                  node-controller  Node test-preload-324893 event: Registered Node test-preload-324893 in Controller
	  Normal  Starting                 22s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)      kubelet          Node test-preload-324893 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)      kubelet          Node test-preload-324893 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)      kubelet          Node test-preload-324893 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                     node-controller  Node test-preload-324893 event: Registered Node test-preload-324893 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec12 23:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066832] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.359262] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.563826] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150892] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.539425] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.899350] systemd-fstab-generator[623]: Ignoring "noauto" for root device
	[  +0.120182] systemd-fstab-generator[634]: Ignoring "noauto" for root device
	[  +0.134639] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.100134] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.197184] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[ +24.330345] systemd-fstab-generator[1085]: Ignoring "noauto" for root device
	[  +9.802661] kauditd_printk_skb: 7 callbacks suppressed
	[Dec12 23:50] kauditd_printk_skb: 13 callbacks suppressed
	
	* 
	* ==> etcd [12b5a61944007f77c233a094fa1fd11357a4c7d851ac14cf4e9851d25e5d204c] <==
	* {"level":"info","ts":"2023-12-12T23:49:48.258Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"9199217ddd03919b","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-12-12T23:49:48.260Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-12T23:49:48.260Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-12-12T23:49:48.262Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2023-12-12T23:49:48.263Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2023-12-12T23:49:48.263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b switched to configuration voters=(10491453631398908315)"}
	{"level":"info","ts":"2023-12-12T23:49:48.263Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6c21f62219c1156b","local-member-id":"9199217ddd03919b","added-peer-id":"9199217ddd03919b","added-peer-peer-urls":["https://192.168.39.69:2380"]}
	{"level":"info","ts":"2023-12-12T23:49:48.263Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6c21f62219c1156b","local-member-id":"9199217ddd03919b","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:49:48.263Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:49:48.267Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9199217ddd03919b","initial-advertise-peer-urls":["https://192.168.39.69:2380"],"listen-peer-urls":["https://192.168.39.69:2380"],"advertise-client-urls":["https://192.168.39.69:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.69:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T23:49:48.267Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T23:49:49.815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-12T23:49:49.815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-12T23:49:49.815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b received MsgPreVoteResp from 9199217ddd03919b at term 2"}
	{"level":"info","ts":"2023-12-12T23:49:49.815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b became candidate at term 3"}
	{"level":"info","ts":"2023-12-12T23:49:49.815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b received MsgVoteResp from 9199217ddd03919b at term 3"}
	{"level":"info","ts":"2023-12-12T23:49:49.815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b became leader at term 3"}
	{"level":"info","ts":"2023-12-12T23:49:49.815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9199217ddd03919b elected leader 9199217ddd03919b at term 3"}
	{"level":"info","ts":"2023-12-12T23:49:49.817Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"9199217ddd03919b","local-member-attributes":"{Name:test-preload-324893 ClientURLs:[https://192.168.39.69:2379]}","request-path":"/0/members/9199217ddd03919b/attributes","cluster-id":"6c21f62219c1156b","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:49:49.817Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:49:49.818Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:49:49.818Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T23:49:49.818Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:49:49.819Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T23:49:49.819Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.69:2379"}
	
	* 
	* ==> kernel <==
	*  23:50:07 up 1 min,  0 users,  load average: 0.74, 0.25, 0.09
	Linux test-preload-324893 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b8ecb05946c3fc28944e011f558f4b19415908a05b93f68ff0bc232388338add] <==
	* I1212 23:49:52.264451       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1212 23:49:52.264480       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1212 23:49:52.264515       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1212 23:49:52.264536       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I1212 23:49:52.288990       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1212 23:49:52.292766       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1212 23:49:52.366496       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1212 23:49:52.400047       1 shared_informer.go:262] Caches are synced for node_authorizer
	E1212 23:49:52.426186       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1212 23:49:52.431807       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 23:49:52.437284       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1212 23:49:52.440316       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 23:49:52.445486       1 cache.go:39] Caches are synced for autoregister controller
	I1212 23:49:52.445827       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 23:49:52.469207       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1212 23:49:52.917445       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1212 23:49:53.240911       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 23:49:54.011428       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1212 23:49:54.021970       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1212 23:49:54.068746       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1212 23:49:54.103012       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 23:49:54.114298       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 23:49:54.758186       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1212 23:50:04.741331       1 controller.go:611] quota admission added evaluator for: endpoints
	I1212 23:50:04.887246       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [57002effe57d0003b6f2adb345201ee154eed9a283e9f024734962d24edd15e0] <==
	* I1212 23:50:04.734806       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1212 23:50:04.736302       1 shared_informer.go:262] Caches are synced for GC
	I1212 23:50:04.737524       1 shared_informer.go:262] Caches are synced for deployment
	I1212 23:50:04.738128       1 shared_informer.go:262] Caches are synced for service account
	I1212 23:50:04.742365       1 shared_informer.go:262] Caches are synced for TTL
	I1212 23:50:04.743872       1 shared_informer.go:262] Caches are synced for taint
	I1212 23:50:04.743984       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1212 23:50:04.744102       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-324893. Assuming now as a timestamp.
	I1212 23:50:04.744151       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1212 23:50:04.744811       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1212 23:50:04.746705       1 event.go:294] "Event occurred" object="test-preload-324893" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-324893 event: Registered Node test-preload-324893 in Controller"
	I1212 23:50:04.749967       1 shared_informer.go:262] Caches are synced for expand
	I1212 23:50:04.751289       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1212 23:50:04.754607       1 shared_informer.go:262] Caches are synced for attach detach
	I1212 23:50:04.758786       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1212 23:50:04.875160       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1212 23:50:04.908196       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1212 23:50:04.909443       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I1212 23:50:04.910685       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1212 23:50:04.910768       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I1212 23:50:04.922409       1 shared_informer.go:262] Caches are synced for resource quota
	I1212 23:50:04.944623       1 shared_informer.go:262] Caches are synced for resource quota
	I1212 23:50:05.379191       1 shared_informer.go:262] Caches are synced for garbage collector
	I1212 23:50:05.432281       1 shared_informer.go:262] Caches are synced for garbage collector
	I1212 23:50:05.432362       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [d7a0b88bb4718ab1f0f65f88700e46ce0f7c4009462e7a2b4e3c64f2454f323f] <==
	* I1212 23:49:54.671967       1 node.go:163] Successfully retrieved node IP: 192.168.39.69
	I1212 23:49:54.672036       1 server_others.go:138] "Detected node IP" address="192.168.39.69"
	I1212 23:49:54.672057       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1212 23:49:54.746215       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1212 23:49:54.746254       1 server_others.go:206] "Using iptables Proxier"
	I1212 23:49:54.746278       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1212 23:49:54.746567       1 server.go:661] "Version info" version="v1.24.4"
	I1212 23:49:54.746600       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:49:54.747190       1 config.go:317] "Starting service config controller"
	I1212 23:49:54.747240       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1212 23:49:54.747261       1 config.go:226] "Starting endpoint slice config controller"
	I1212 23:49:54.747265       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1212 23:49:54.752762       1 config.go:444] "Starting node config controller"
	I1212 23:49:54.752872       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1212 23:49:54.848125       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1212 23:49:54.848144       1 shared_informer.go:262] Caches are synced for service config
	I1212 23:49:54.853397       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [e6d135232b4e19f1fcbef5a0ff93852986b03af3c1cc52d180597fa96e461626] <==
	* I1212 23:49:48.913843       1 serving.go:348] Generated self-signed cert in-memory
	W1212 23:49:52.305748       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 23:49:52.306712       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:49:52.306771       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 23:49:52.306803       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 23:49:52.359096       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1212 23:49:52.359250       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:49:52.373158       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1212 23:49:52.373806       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 23:49:52.378843       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 23:49:52.373831       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 23:49:52.480250       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:49:11 UTC, ends at Tue 2023-12-12 23:50:07 UTC. --
	Dec 12 23:49:52 test-preload-324893 kubelet[1091]: I1212 23:49:52.341864    1091 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 12 23:49:52 test-preload-324893 kubelet[1091]: E1212 23:49:52.342411    1091 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Dec 12 23:49:52 test-preload-324893 kubelet[1091]: I1212 23:49:52.435174    1091 kubelet_node_status.go:108] "Node was previously registered" node="test-preload-324893"
	Dec 12 23:49:52 test-preload-324893 kubelet[1091]: I1212 23:49:52.435266    1091 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-324893"
	Dec 12 23:49:52 test-preload-324893 kubelet[1091]: I1212 23:49:52.470858    1091 setters.go:532] "Node became not ready" node="test-preload-324893" condition={Type:Ready Status:False LastHeartbeatTime:2023-12-12 23:49:52.470780823 +0000 UTC m=+7.466262660 LastTransitionTime:2023-12-12 23:49:52.470780823 +0000 UTC m=+7.466262660 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Dec 12 23:49:53 test-preload-324893 kubelet[1091]: I1212 23:49:53.173398    1091 apiserver.go:52] "Watching apiserver"
	Dec 12 23:49:53 test-preload-324893 kubelet[1091]: I1212 23:49:53.177794    1091 topology_manager.go:200] "Topology Admit Handler"
	Dec 12 23:49:53 test-preload-324893 kubelet[1091]: I1212 23:49:53.177903    1091 topology_manager.go:200] "Topology Admit Handler"
	Dec 12 23:49:53 test-preload-324893 kubelet[1091]: I1212 23:49:53.177944    1091 topology_manager.go:200] "Topology Admit Handler"
	Dec 12 23:49:53 test-preload-324893 kubelet[1091]: E1212 23:49:53.179811    1091 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-kwcm4" podUID=0cbe91ce-99d8-472a-81b0-b47cf434b399
	Dec 12 23:49:53 test-preload-324893 kubelet[1091]: I1212 23:49:53.240897    1091 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a5f4273-ba3f-4cd5-a836-7748d629f49d-xtables-lock\") pod \"kube-proxy-tm5bf\" (UID: \"3a5f4273-ba3f-4cd5-a836-7748d629f49d\") " pod="kube-system/kube-proxy-tm5bf"
	Dec 12 23:49:53 test-preload-324893 kubelet[1091]: I1212 23:49:53.240968    1091 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a821573f-0652-4889-94b0-d64aa606975a-tmp\") pod \"storage-provisioner\" (UID: \"a821573f-0652-4889-94b0-d64aa606975a\") " pod="kube-system/storage-provisioner"
	Dec 12 23:49:53 test-preload-324893 kubelet[1091]: I1212 23:49:53.240994    1091 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a5f4273-ba3f-4cd5-a836-7748d629f49d-kube-proxy\") pod \"kube-proxy-tm5bf\" (UID: \"3a5f4273-ba3f-4cd5-a836-7748d629f49d\") " pod="kube-system/kube-proxy-tm5bf"
	Dec 12 23:49:53 test-preload-324893 kubelet[1091]: I1212 23:49:53.241012    1091 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a5f4273-ba3f-4cd5-a836-7748d629f49d-lib-modules\") pod \"kube-proxy-tm5bf\" (UID: \"3a5f4273-ba3f-4cd5-a836-7748d629f49d\") " pod="kube-system/kube-proxy-tm5bf"
	Dec 12 23:49:53 test-preload-324893 kubelet[1091]: I1212 23:49:53.241035    1091 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w67qk\" (UniqueName: \"kubernetes.io/projected/3a5f4273-ba3f-4cd5-a836-7748d629f49d-kube-api-access-w67qk\") pod \"kube-proxy-tm5bf\" (UID: \"3a5f4273-ba3f-4cd5-a836-7748d629f49d\") " pod="kube-system/kube-proxy-tm5bf"
	Dec 12 23:49:53 test-preload-324893 kubelet[1091]: I1212 23:49:53.241055    1091 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cbe91ce-99d8-472a-81b0-b47cf434b399-config-volume\") pod \"coredns-6d4b75cb6d-kwcm4\" (UID: \"0cbe91ce-99d8-472a-81b0-b47cf434b399\") " pod="kube-system/coredns-6d4b75cb6d-kwcm4"
	Dec 12 23:49:53 test-preload-324893 kubelet[1091]: I1212 23:49:53.241074    1091 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rlv2\" (UniqueName: \"kubernetes.io/projected/0cbe91ce-99d8-472a-81b0-b47cf434b399-kube-api-access-7rlv2\") pod \"coredns-6d4b75cb6d-kwcm4\" (UID: \"0cbe91ce-99d8-472a-81b0-b47cf434b399\") " pod="kube-system/coredns-6d4b75cb6d-kwcm4"
	Dec 12 23:49:53 test-preload-324893 kubelet[1091]: I1212 23:49:53.241092    1091 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsmnw\" (UniqueName: \"kubernetes.io/projected/a821573f-0652-4889-94b0-d64aa606975a-kube-api-access-lsmnw\") pod \"storage-provisioner\" (UID: \"a821573f-0652-4889-94b0-d64aa606975a\") " pod="kube-system/storage-provisioner"
	Dec 12 23:49:53 test-preload-324893 kubelet[1091]: I1212 23:49:53.241102    1091 reconciler.go:159] "Reconciler: start to sync state"
	Dec 12 23:49:53 test-preload-324893 kubelet[1091]: E1212 23:49:53.344774    1091 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 23:49:53 test-preload-324893 kubelet[1091]: E1212 23:49:53.344900    1091 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/0cbe91ce-99d8-472a-81b0-b47cf434b399-config-volume podName:0cbe91ce-99d8-472a-81b0-b47cf434b399 nodeName:}" failed. No retries permitted until 2023-12-12 23:49:53.844862087 +0000 UTC m=+8.840343938 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0cbe91ce-99d8-472a-81b0-b47cf434b399-config-volume") pod "coredns-6d4b75cb6d-kwcm4" (UID: "0cbe91ce-99d8-472a-81b0-b47cf434b399") : object "kube-system"/"coredns" not registered
	Dec 12 23:49:53 test-preload-324893 kubelet[1091]: E1212 23:49:53.849424    1091 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 23:49:53 test-preload-324893 kubelet[1091]: E1212 23:49:53.849482    1091 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/0cbe91ce-99d8-472a-81b0-b47cf434b399-config-volume podName:0cbe91ce-99d8-472a-81b0-b47cf434b399 nodeName:}" failed. No retries permitted until 2023-12-12 23:49:54.849468912 +0000 UTC m=+9.844950749 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0cbe91ce-99d8-472a-81b0-b47cf434b399-config-volume") pod "coredns-6d4b75cb6d-kwcm4" (UID: "0cbe91ce-99d8-472a-81b0-b47cf434b399") : object "kube-system"/"coredns" not registered
	Dec 12 23:49:54 test-preload-324893 kubelet[1091]: E1212 23:49:54.857228    1091 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 23:49:54 test-preload-324893 kubelet[1091]: E1212 23:49:54.857320    1091 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/0cbe91ce-99d8-472a-81b0-b47cf434b399-config-volume podName:0cbe91ce-99d8-472a-81b0-b47cf434b399 nodeName:}" failed. No retries permitted until 2023-12-12 23:49:56.857305745 +0000 UTC m=+11.852787583 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0cbe91ce-99d8-472a-81b0-b47cf434b399-config-volume") pod "coredns-6d4b75cb6d-kwcm4" (UID: "0cbe91ce-99d8-472a-81b0-b47cf434b399") : object "kube-system"/"coredns" not registered
	
	* 
	* ==> storage-provisioner [a0909322b4d188821591813127563657c2a6c058d46aa8c5de4a17ff6ebfa48f] <==
	* I1212 23:49:54.795576       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-324893 -n test-preload-324893
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-324893 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-324893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-324893
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-324893: (1.077755613s)
--- FAIL: TestPreload (265.55s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (209.74s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.2994062377.exe start -p running-upgrade-279020 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.2994062377.exe start -p running-upgrade-279020 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m19.986586113s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-279020 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1212 23:55:11.805296  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-279020 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (1m6.060116536s)

                                                
                                                
-- stdout --
	* [running-upgrade-279020] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17777
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-279020 in cluster running-upgrade-279020
	* Updating the running kvm2 "running-upgrade-279020" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 23:54:29.821101  170999 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:54:29.821430  170999 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:54:29.821441  170999 out.go:309] Setting ErrFile to fd 2...
	I1212 23:54:29.821446  170999 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:54:29.821669  170999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1212 23:54:29.822282  170999 out.go:303] Setting JSON to false
	I1212 23:54:29.823235  170999 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9418,"bootTime":1702415852,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 23:54:29.823300  170999 start.go:138] virtualization: kvm guest
	I1212 23:54:29.825628  170999 out.go:177] * [running-upgrade-279020] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 23:54:29.827237  170999 out.go:177]   - MINIKUBE_LOCATION=17777
	I1212 23:54:29.827311  170999 notify.go:220] Checking for updates...
	I1212 23:54:29.828675  170999 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:54:29.830408  170999 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:54:29.831934  170999 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 23:54:29.833379  170999 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 23:54:29.834793  170999 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:54:29.836578  170999 config.go:182] Loaded profile config "running-upgrade-279020": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1212 23:54:29.836597  170999 start_flags.go:694] config upgrade: Driver=kvm2
	I1212 23:54:29.836607  170999 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401
	I1212 23:54:29.836689  170999 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/running-upgrade-279020/config.json ...
	I1212 23:54:29.837285  170999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:54:29.837351  170999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:54:29.852460  170999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35989
	I1212 23:54:29.852838  170999 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:54:29.853505  170999 main.go:141] libmachine: Using API Version  1
	I1212 23:54:29.853532  170999 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:54:29.853875  170999 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:54:29.854088  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .DriverName
	I1212 23:54:29.855849  170999 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1212 23:54:29.857307  170999 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:54:29.857616  170999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:54:29.857653  170999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:54:29.872310  170999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43237
	I1212 23:54:29.872746  170999 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:54:29.873203  170999 main.go:141] libmachine: Using API Version  1
	I1212 23:54:29.873226  170999 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:54:29.873525  170999 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:54:29.873708  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .DriverName
	I1212 23:54:29.907984  170999 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 23:54:29.909449  170999 start.go:298] selected driver: kvm2
	I1212 23:54:29.909463  170999 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-279020 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.124 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1212 23:54:29.909575  170999 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:54:29.910251  170999 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:54:29.910318  170999 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17777-136241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 23:54:29.925629  170999 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 23:54:29.926079  170999 cni.go:84] Creating CNI manager for ""
	I1212 23:54:29.926107  170999 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1212 23:54:29.926121  170999 start_flags.go:323] config:
	{Name:running-upgrade-279020 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.124 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1212 23:54:29.926332  170999 iso.go:125] acquiring lock: {Name:mkaf0c5717f5bf6253bd7ebf86eb20a82d195bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:54:29.928056  170999 out.go:177] * Starting control plane node running-upgrade-279020 in cluster running-upgrade-279020
	I1212 23:54:29.929318  170999 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1212 23:54:30.399216  170999 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1212 23:54:30.399363  170999 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/running-upgrade-279020/config.json ...
	I1212 23:54:30.399518  170999 cache.go:107] acquiring lock: {Name:mkc063f9cf1a956f30df032f33a365ae85cf30bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:54:30.399560  170999 cache.go:107] acquiring lock: {Name:mkf009f4f5b759ade33cc6ab092dff34de7b866c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:54:30.399633  170999 cache.go:115] /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1212 23:54:30.399621  170999 cache.go:107] acquiring lock: {Name:mk98c7c868ea535fc5c65b20b21e14556e4b41e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:54:30.399563  170999 cache.go:107] acquiring lock: {Name:mk7783e1487244af2295c37e59e0ec4c3c329130 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:54:30.399656  170999 cache.go:107] acquiring lock: {Name:mk7cdb9ed76fc9fcdc2a0920615162ecebc7719c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:54:30.399664  170999 cache.go:107] acquiring lock: {Name:mk0f6e87bca244308ba30eab85c956a11b34fa55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:54:30.399688  170999 start.go:365] acquiring machines lock for running-upgrade-279020: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:54:30.399521  170999 cache.go:107] acquiring lock: {Name:mkec716019c9ae1e82965789eff0c8adc7c64400 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:54:30.399729  170999 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I1212 23:54:30.399754  170999 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1212 23:54:30.399779  170999 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I1212 23:54:30.399877  170999 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I1212 23:54:30.399899  170999 cache.go:107] acquiring lock: {Name:mk2cea8c48c79d635f916aa6b8522279b61b7a40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:54:30.399922  170999 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1212 23:54:30.399966  170999 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I1212 23:54:30.400042  170999 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1212 23:54:30.399645  170999 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 142.355µs
	I1212 23:54:30.400101  170999 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1212 23:54:30.401057  170999 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1212 23:54:30.401061  170999 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I1212 23:54:30.401060  170999 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1212 23:54:30.401090  170999 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I1212 23:54:30.401115  170999 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I1212 23:54:30.401201  170999 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I1212 23:54:30.401245  170999 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1212 23:54:30.544818  170999 cache.go:162] opening:  /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1212 23:54:30.563263  170999 cache.go:162] opening:  /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I1212 23:54:30.601858  170999 cache.go:162] opening:  /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I1212 23:54:30.605035  170999 cache.go:157] /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1212 23:54:30.605069  170999 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 205.203264ms
	I1212 23:54:30.605092  170999 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1212 23:54:30.677909  170999 cache.go:162] opening:  /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1212 23:54:30.691307  170999 cache.go:162] opening:  /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I1212 23:54:30.699508  170999 cache.go:162] opening:  /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I1212 23:54:30.706968  170999 cache.go:162] opening:  /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I1212 23:54:31.029872  170999 cache.go:157] /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1212 23:54:31.029899  170999 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 630.235709ms
	I1212 23:54:31.029913  170999 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1212 23:54:31.056511  170999 cache.go:157] /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1212 23:54:31.056540  170999 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 656.981538ms
	I1212 23:54:31.056555  170999 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1212 23:54:31.237469  170999 cache.go:157] /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1212 23:54:31.237493  170999 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 837.870064ms
	I1212 23:54:31.237504  170999 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1212 23:54:31.332573  170999 cache.go:157] /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1212 23:54:31.332600  170999 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 933.066077ms
	I1212 23:54:31.332611  170999 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1212 23:54:31.694188  170999 cache.go:157] /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1212 23:54:31.694218  170999 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.294598675s
	I1212 23:54:31.694232  170999 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1212 23:54:31.790724  170999 cache.go:157] /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1212 23:54:31.790771  170999 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 1.391264817s
	I1212 23:54:31.790783  170999 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1212 23:54:31.790797  170999 cache.go:87] Successfully saved all images to host disk.
	I1212 23:55:32.265541  170999 start.go:369] acquired machines lock for "running-upgrade-279020" in 1m1.865820229s
	I1212 23:55:32.265608  170999 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:55:32.265616  170999 fix.go:54] fixHost starting: minikube
	I1212 23:55:32.266050  170999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:55:32.266100  170999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:55:32.282371  170999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37253
	I1212 23:55:32.282778  170999 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:55:32.283301  170999 main.go:141] libmachine: Using API Version  1
	I1212 23:55:32.283326  170999 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:55:32.283661  170999 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:55:32.283869  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .DriverName
	I1212 23:55:32.284035  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetState
	I1212 23:55:32.285505  170999 fix.go:102] recreateIfNeeded on running-upgrade-279020: state=Running err=<nil>
	W1212 23:55:32.285528  170999 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:55:32.287663  170999 out.go:177] * Updating the running kvm2 "running-upgrade-279020" VM ...
	I1212 23:55:32.289414  170999 machine.go:88] provisioning docker machine ...
	I1212 23:55:32.289447  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .DriverName
	I1212 23:55:32.289636  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetMachineName
	I1212 23:55:32.289800  170999 buildroot.go:166] provisioning hostname "running-upgrade-279020"
	I1212 23:55:32.289815  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetMachineName
	I1212 23:55:32.289940  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHHostname
	I1212 23:55:32.292245  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:32.292688  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:f7:fc", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:52:46 +0000 UTC Type:0 Mac:52:54:00:1e:f7:fc Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:running-upgrade-279020 Clientid:01:52:54:00:1e:f7:fc}
	I1212 23:55:32.292726  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined IP address 192.168.50.124 and MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:32.292887  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHPort
	I1212 23:55:32.293052  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHKeyPath
	I1212 23:55:32.293198  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHKeyPath
	I1212 23:55:32.293313  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHUsername
	I1212 23:55:32.293522  170999 main.go:141] libmachine: Using SSH client type: native
	I1212 23:55:32.294019  170999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.124 22 <nil> <nil>}
	I1212 23:55:32.294038  170999 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-279020 && echo "running-upgrade-279020" | sudo tee /etc/hostname
	I1212 23:55:32.437525  170999 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-279020
	
	I1212 23:55:32.437560  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHHostname
	I1212 23:55:32.440889  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:32.441282  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:f7:fc", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:52:46 +0000 UTC Type:0 Mac:52:54:00:1e:f7:fc Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:running-upgrade-279020 Clientid:01:52:54:00:1e:f7:fc}
	I1212 23:55:32.441380  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined IP address 192.168.50.124 and MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:32.441477  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHPort
	I1212 23:55:32.441700  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHKeyPath
	I1212 23:55:32.441898  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHKeyPath
	I1212 23:55:32.442074  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHUsername
	I1212 23:55:32.442253  170999 main.go:141] libmachine: Using SSH client type: native
	I1212 23:55:32.442709  170999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.124 22 <nil> <nil>}
	I1212 23:55:32.442751  170999 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-279020' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-279020/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-279020' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:55:32.569285  170999 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:55:32.569330  170999 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1212 23:55:32.569354  170999 buildroot.go:174] setting up certificates
	I1212 23:55:32.569369  170999 provision.go:83] configureAuth start
	I1212 23:55:32.569386  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetMachineName
	I1212 23:55:32.569688  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetIP
	I1212 23:55:32.572630  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:32.573088  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:f7:fc", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:52:46 +0000 UTC Type:0 Mac:52:54:00:1e:f7:fc Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:running-upgrade-279020 Clientid:01:52:54:00:1e:f7:fc}
	I1212 23:55:32.573118  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined IP address 192.168.50.124 and MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:32.573250  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHHostname
	I1212 23:55:32.575673  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:32.576450  170999 provision.go:138] copyHostCerts
	I1212 23:55:32.576473  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:f7:fc", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:52:46 +0000 UTC Type:0 Mac:52:54:00:1e:f7:fc Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:running-upgrade-279020 Clientid:01:52:54:00:1e:f7:fc}
	I1212 23:55:32.576502  170999 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1212 23:55:32.576511  170999 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1212 23:55:32.576516  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined IP address 192.168.50.124 and MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:32.576576  170999 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1212 23:55:32.576696  170999 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1212 23:55:32.576708  170999 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1212 23:55:32.576746  170999 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1212 23:55:32.576840  170999 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1212 23:55:32.576855  170999 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1212 23:55:32.576933  170999 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1212 23:55:32.577052  170999 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-279020 san=[192.168.50.124 192.168.50.124 localhost 127.0.0.1 minikube running-upgrade-279020]
	I1212 23:55:32.928507  170999 provision.go:172] copyRemoteCerts
	I1212 23:55:32.928597  170999 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:55:32.928630  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHHostname
	I1212 23:55:32.931462  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:32.931781  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:f7:fc", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:52:46 +0000 UTC Type:0 Mac:52:54:00:1e:f7:fc Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:running-upgrade-279020 Clientid:01:52:54:00:1e:f7:fc}
	I1212 23:55:32.931813  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined IP address 192.168.50.124 and MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:32.932098  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHPort
	I1212 23:55:32.932326  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHKeyPath
	I1212 23:55:32.932498  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHUsername
	I1212 23:55:32.932654  170999 sshutil.go:53] new ssh client: &{IP:192.168.50.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/running-upgrade-279020/id_rsa Username:docker}
	I1212 23:55:33.029386  170999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:55:33.055976  170999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 23:55:33.070398  170999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:55:33.084902  170999 provision.go:86] duration metric: configureAuth took 515.515221ms
	I1212 23:55:33.084931  170999 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:55:33.085111  170999 config.go:182] Loaded profile config "running-upgrade-279020": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1212 23:55:33.085206  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHHostname
	I1212 23:55:33.088627  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:33.089092  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:f7:fc", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:52:46 +0000 UTC Type:0 Mac:52:54:00:1e:f7:fc Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:running-upgrade-279020 Clientid:01:52:54:00:1e:f7:fc}
	I1212 23:55:33.089124  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined IP address 192.168.50.124 and MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:33.089295  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHPort
	I1212 23:55:33.089517  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHKeyPath
	I1212 23:55:33.089699  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHKeyPath
	I1212 23:55:33.089871  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHUsername
	I1212 23:55:33.090072  170999 main.go:141] libmachine: Using SSH client type: native
	I1212 23:55:33.090393  170999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.124 22 <nil> <nil>}
	I1212 23:55:33.090409  170999 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:55:33.649749  170999 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:55:33.649780  170999 machine.go:91] provisioned docker machine in 1.360344305s
	I1212 23:55:33.649792  170999 start.go:300] post-start starting for "running-upgrade-279020" (driver="kvm2")
	I1212 23:55:33.649805  170999 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:55:33.649834  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .DriverName
	I1212 23:55:33.650161  170999 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:55:33.650192  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHHostname
	I1212 23:55:33.653100  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:33.653455  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:f7:fc", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:52:46 +0000 UTC Type:0 Mac:52:54:00:1e:f7:fc Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:running-upgrade-279020 Clientid:01:52:54:00:1e:f7:fc}
	I1212 23:55:33.653505  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined IP address 192.168.50.124 and MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:33.653626  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHPort
	I1212 23:55:33.653817  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHKeyPath
	I1212 23:55:33.654014  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHUsername
	I1212 23:55:33.654199  170999 sshutil.go:53] new ssh client: &{IP:192.168.50.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/running-upgrade-279020/id_rsa Username:docker}
	I1212 23:55:33.755904  170999 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:55:33.760324  170999 info.go:137] Remote host: Buildroot 2019.02.7
	I1212 23:55:33.760354  170999 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1212 23:55:33.760454  170999 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1212 23:55:33.760561  170999 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1212 23:55:33.760682  170999 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:55:33.767520  170999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1212 23:55:33.782099  170999 start.go:303] post-start completed in 132.291862ms
	I1212 23:55:33.782118  170999 fix.go:56] fixHost completed within 1.516503417s
	I1212 23:55:33.782150  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHHostname
	I1212 23:55:33.784960  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:33.785340  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:f7:fc", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:52:46 +0000 UTC Type:0 Mac:52:54:00:1e:f7:fc Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:running-upgrade-279020 Clientid:01:52:54:00:1e:f7:fc}
	I1212 23:55:33.785364  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined IP address 192.168.50.124 and MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:33.785548  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHPort
	I1212 23:55:33.785768  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHKeyPath
	I1212 23:55:33.785938  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHKeyPath
	I1212 23:55:33.786122  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHUsername
	I1212 23:55:33.786303  170999 main.go:141] libmachine: Using SSH client type: native
	I1212 23:55:33.786814  170999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.124 22 <nil> <nil>}
	I1212 23:55:33.786835  170999 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:55:33.915188  170999 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702425333.910370296
	
	I1212 23:55:33.915224  170999 fix.go:206] guest clock: 1702425333.910370296
	I1212 23:55:33.915236  170999 fix.go:219] Guest: 2023-12-12 23:55:33.910370296 +0000 UTC Remote: 2023-12-12 23:55:33.782122389 +0000 UTC m=+64.010664004 (delta=128.247907ms)
	I1212 23:55:33.915298  170999 fix.go:190] guest clock delta is within tolerance: 128.247907ms
	I1212 23:55:33.915342  170999 start.go:83] releasing machines lock for "running-upgrade-279020", held for 1.649732849s
	I1212 23:55:33.915397  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .DriverName
	I1212 23:55:33.915700  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetIP
	I1212 23:55:33.919069  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:33.919542  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:f7:fc", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:52:46 +0000 UTC Type:0 Mac:52:54:00:1e:f7:fc Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:running-upgrade-279020 Clientid:01:52:54:00:1e:f7:fc}
	I1212 23:55:33.919578  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined IP address 192.168.50.124 and MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:33.919773  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .DriverName
	I1212 23:55:33.920351  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .DriverName
	I1212 23:55:33.920542  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .DriverName
	I1212 23:55:33.920654  170999 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:55:33.920703  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHHostname
	I1212 23:55:33.920755  170999 ssh_runner.go:195] Run: cat /version.json
	I1212 23:55:33.920786  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHHostname
	I1212 23:55:33.923915  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:33.924164  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:33.924257  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:f7:fc", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:52:46 +0000 UTC Type:0 Mac:52:54:00:1e:f7:fc Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:running-upgrade-279020 Clientid:01:52:54:00:1e:f7:fc}
	I1212 23:55:33.924289  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined IP address 192.168.50.124 and MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:33.924462  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHPort
	I1212 23:55:33.924667  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:f7:fc", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:52:46 +0000 UTC Type:0 Mac:52:54:00:1e:f7:fc Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:running-upgrade-279020 Clientid:01:52:54:00:1e:f7:fc}
	I1212 23:55:33.924723  170999 main.go:141] libmachine: (running-upgrade-279020) DBG | domain running-upgrade-279020 has defined IP address 192.168.50.124 and MAC address 52:54:00:1e:f7:fc in network minikube-net
	I1212 23:55:33.924732  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHKeyPath
	I1212 23:55:33.924977  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHPort
	I1212 23:55:33.925013  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHUsername
	I1212 23:55:33.925159  170999 sshutil.go:53] new ssh client: &{IP:192.168.50.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/running-upgrade-279020/id_rsa Username:docker}
	I1212 23:55:33.925241  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHKeyPath
	I1212 23:55:33.925416  170999 main.go:141] libmachine: (running-upgrade-279020) Calling .GetSSHUsername
	I1212 23:55:33.925542  170999 sshutil.go:53] new ssh client: &{IP:192.168.50.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/running-upgrade-279020/id_rsa Username:docker}
	W1212 23:55:34.020213  170999 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1212 23:55:34.020285  170999 ssh_runner.go:195] Run: systemctl --version
	I1212 23:55:34.046361  170999 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:55:34.128758  170999 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:55:34.135018  170999 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:55:34.135101  170999 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:55:34.141118  170999 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 23:55:34.141146  170999 start.go:475] detecting cgroup driver to use...
	I1212 23:55:34.141207  170999 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:55:34.160300  170999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:55:34.170028  170999 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:55:34.170096  170999 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:55:34.180185  170999 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:55:34.190035  170999 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1212 23:55:34.199281  170999 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1212 23:55:34.199350  170999 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:55:34.352302  170999 docker.go:219] disabling docker service ...
	I1212 23:55:34.352374  170999 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:55:35.375830  170999 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.023431196s)
	I1212 23:55:35.375894  170999 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:55:35.388208  170999 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:55:35.563628  170999 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:55:35.753146  170999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:55:35.776928  170999 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:55:35.802513  170999 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1212 23:55:35.802585  170999 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:55:35.817158  170999 out.go:177] 
	W1212 23:55:35.818632  170999 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1212 23:55:35.818655  170999 out.go:239] * 
	* 
	W1212 23:55:35.819542  170999 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 23:55:35.821422  170999 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-279020 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-12 23:55:35.83863388 +0000 UTC m=+3659.394771423
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-279020 -n running-upgrade-279020
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-279020 -n running-upgrade-279020: exit status 4 (475.712268ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 23:55:36.276925  171844 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-279020" does not appear in /home/jenkins/minikube-integration/17777-136241/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-279020" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-279020" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-279020
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-279020: (1.149213113s)
--- FAIL: TestRunningBinaryUpgrade (209.74s)
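Note on the failure above: the new binary aborts at the pause_image step because it edits /etc/crio/crio.conf.d/02-crio.conf, and the guest that the v1.6.2 binary provisioned has no such drop-in file (sed exits 1 with "No such file or directory"), which surfaces as RUNTIME_ENABLE / exit status 90. A minimal shell sketch of a guarded edit, assuming the older guest keeps its CRI-O settings in a monolithic /etc/crio/crio.conf instead (illustrative only, not minikube's actual code):

    # Hypothetical guarded version of the failing step: edit whichever
    # CRI-O config file actually exists on the guest.
    PAUSE_IMG="registry.k8s.io/pause:3.1"
    for f in /etc/crio/crio.conf.d/02-crio.conf /etc/crio/crio.conf; do
      if sudo test -f "$f"; then
        sudo sed -i "s|^.*pause_image = .*$|pause_image = \"$PAUSE_IMG\"|" "$f"
        break
      fi
    done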

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (263.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.2473781936.exe start -p stopped-upgrade-884273 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.2473781936.exe start -p stopped-upgrade-884273 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m4.078379793s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.2473781936.exe -p stopped-upgrade-884273 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.2473781936.exe -p stopped-upgrade-884273 stop: (1m32.618152686s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-884273 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1213 00:00:11.804158  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-884273 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (46.886591226s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-884273] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17777
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-884273 in cluster stopped-upgrade-884273
	* Restarting existing kvm2 VM for "stopped-upgrade-884273" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 00:00:03.091926  174977 out.go:296] Setting OutFile to fd 1 ...
	I1213 00:00:03.093105  174977 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:00:03.093122  174977 out.go:309] Setting ErrFile to fd 2...
	I1213 00:00:03.093128  174977 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:00:03.093327  174977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1213 00:00:03.093996  174977 out.go:303] Setting JSON to false
	I1213 00:00:03.095034  174977 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9751,"bootTime":1702415852,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 00:00:03.095100  174977 start.go:138] virtualization: kvm guest
	I1213 00:00:03.096997  174977 out.go:177] * [stopped-upgrade-884273] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 00:00:03.098906  174977 out.go:177]   - MINIKUBE_LOCATION=17777
	I1213 00:00:03.098917  174977 notify.go:220] Checking for updates...
	I1213 00:00:03.100992  174977 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 00:00:03.103051  174977 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:00:03.104679  174977 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1213 00:00:03.106640  174977 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 00:00:03.108802  174977 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 00:00:03.110690  174977 config.go:182] Loaded profile config "stopped-upgrade-884273": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1213 00:00:03.110706  174977 start_flags.go:694] config upgrade: Driver=kvm2
	I1213 00:00:03.110715  174977 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401
	I1213 00:00:03.110785  174977 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/stopped-upgrade-884273/config.json ...
	I1213 00:00:03.111459  174977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:00:03.111530  174977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:00:03.127085  174977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44143
	I1213 00:00:03.127565  174977 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:00:03.128167  174977 main.go:141] libmachine: Using API Version  1
	I1213 00:00:03.128191  174977 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:00:03.128638  174977 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:00:03.128903  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .DriverName
	I1213 00:00:03.131186  174977 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1213 00:00:03.133145  174977 driver.go:392] Setting default libvirt URI to qemu:///system
	I1213 00:00:03.133579  174977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:00:03.133632  174977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:00:03.151860  174977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43645
	I1213 00:00:03.152347  174977 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:00:03.152861  174977 main.go:141] libmachine: Using API Version  1
	I1213 00:00:03.152894  174977 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:00:03.153311  174977 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:00:03.153538  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .DriverName
	I1213 00:00:03.201658  174977 out.go:177] * Using the kvm2 driver based on existing profile
	I1213 00:00:03.203302  174977 start.go:298] selected driver: kvm2
	I1213 00:00:03.203325  174977 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-884273 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.83.35 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1213 00:00:03.203462  174977 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 00:00:03.204419  174977 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:00:03.204550  174977 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17777-136241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1213 00:00:03.222560  174977 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1213 00:00:03.223061  174977 cni.go:84] Creating CNI manager for ""
	I1213 00:00:03.223083  174977 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1213 00:00:03.223096  174977 start_flags.go:323] config:
	{Name:stopped-upgrade-884273 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.83.35 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1213 00:00:03.223326  174977 iso.go:125] acquiring lock: {Name:mkaf0c5717f5bf6253bd7ebf86eb20a82d195bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:00:03.225539  174977 out.go:177] * Starting control plane node stopped-upgrade-884273 in cluster stopped-upgrade-884273
	I1213 00:00:03.227275  174977 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1213 00:00:03.347880  174977 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1213 00:00:03.348059  174977 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/stopped-upgrade-884273/config.json ...
	I1213 00:00:03.348167  174977 cache.go:107] acquiring lock: {Name:mkc063f9cf1a956f30df032f33a365ae85cf30bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:00:03.348211  174977 cache.go:107] acquiring lock: {Name:mk7cdb9ed76fc9fcdc2a0920615162ecebc7719c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:00:03.348264  174977 cache.go:115] /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 00:00:03.348277  174977 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 126.737µs
	I1213 00:00:03.348289  174977 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 00:00:03.348302  174977 cache.go:115] /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1213 00:00:03.348305  174977 cache.go:107] acquiring lock: {Name:mk2cea8c48c79d635f916aa6b8522279b61b7a40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:00:03.348314  174977 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 103.616µs
	I1213 00:00:03.348328  174977 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1213 00:00:03.348345  174977 cache.go:115] /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1213 00:00:03.348343  174977 cache.go:107] acquiring lock: {Name:mk7783e1487244af2295c37e59e0ec4c3c329130 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:00:03.348353  174977 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 51.522µs
	I1213 00:00:03.348371  174977 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1213 00:00:03.348380  174977 cache.go:115] /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1213 00:00:03.348389  174977 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 45.844µs
	I1213 00:00:03.348397  174977 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1213 00:00:03.348394  174977 start.go:365] acquiring machines lock for stopped-upgrade-884273: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 00:00:03.348410  174977 cache.go:107] acquiring lock: {Name:mkf009f4f5b759ade33cc6ab092dff34de7b866c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:00:03.348417  174977 cache.go:107] acquiring lock: {Name:mk98c7c868ea535fc5c65b20b21e14556e4b41e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:00:03.348167  174977 cache.go:107] acquiring lock: {Name:mkec716019c9ae1e82965789eff0c8adc7c64400 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:00:03.348499  174977 cache.go:115] /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1213 00:00:03.348518  174977 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 109.205µs
	I1213 00:00:03.348526  174977 cache.go:115] /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1213 00:00:03.348540  174977 cache.go:115] /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1213 00:00:03.348552  174977 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 137.484µs
	I1213 00:00:03.348569  174977 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1213 00:00:03.348555  174977 cache.go:107] acquiring lock: {Name:mk0f6e87bca244308ba30eab85c956a11b34fa55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:00:03.348537  174977 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 381.819µs
	I1213 00:00:03.348580  174977 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1213 00:00:03.348529  174977 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1213 00:00:03.348596  174977 cache.go:115] /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1213 00:00:03.348603  174977 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 51.651µs
	I1213 00:00:03.348612  174977 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1213 00:00:03.348619  174977 cache.go:87] Successfully saved all images to host disk.
	I1213 00:00:07.557536  174977 start.go:369] acquired machines lock for "stopped-upgrade-884273" in 4.209094643s
	I1213 00:00:07.557585  174977 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:00:07.557598  174977 fix.go:54] fixHost starting: minikube
	I1213 00:00:07.558034  174977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:00:07.558076  174977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:00:07.574855  174977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I1213 00:00:07.575309  174977 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:00:07.575729  174977 main.go:141] libmachine: Using API Version  1
	I1213 00:00:07.575751  174977 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:00:07.576106  174977 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:00:07.576296  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .DriverName
	I1213 00:00:07.576467  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetState
	I1213 00:00:07.578189  174977 fix.go:102] recreateIfNeeded on stopped-upgrade-884273: state=Stopped err=<nil>
	I1213 00:00:07.578216  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .DriverName
	W1213 00:00:07.578343  174977 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:00:07.580559  174977 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-884273" ...
	I1213 00:00:07.582550  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .Start
	I1213 00:00:07.582728  174977 main.go:141] libmachine: (stopped-upgrade-884273) Ensuring networks are active...
	I1213 00:00:07.583404  174977 main.go:141] libmachine: (stopped-upgrade-884273) Ensuring network default is active
	I1213 00:00:07.583810  174977 main.go:141] libmachine: (stopped-upgrade-884273) Ensuring network minikube-net is active
	I1213 00:00:07.584274  174977 main.go:141] libmachine: (stopped-upgrade-884273) Getting domain xml...
	I1213 00:00:07.584926  174977 main.go:141] libmachine: (stopped-upgrade-884273) Creating domain...
	I1213 00:00:08.894138  174977 main.go:141] libmachine: (stopped-upgrade-884273) Waiting to get IP...
	I1213 00:00:08.895191  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:08.895737  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | unable to find current IP address of domain stopped-upgrade-884273 in network minikube-net
	I1213 00:00:08.895790  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | I1213 00:00:08.895713  175056 retry.go:31] will retry after 245.153657ms: waiting for machine to come up
	I1213 00:00:09.142438  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:09.143077  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | unable to find current IP address of domain stopped-upgrade-884273 in network minikube-net
	I1213 00:00:09.143104  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | I1213 00:00:09.143041  175056 retry.go:31] will retry after 373.536763ms: waiting for machine to come up
	I1213 00:00:09.518616  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:09.519258  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | unable to find current IP address of domain stopped-upgrade-884273 in network minikube-net
	I1213 00:00:09.519291  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | I1213 00:00:09.519210  175056 retry.go:31] will retry after 382.46678ms: waiting for machine to come up
	I1213 00:00:09.904046  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:09.904762  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | unable to find current IP address of domain stopped-upgrade-884273 in network minikube-net
	I1213 00:00:09.904792  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | I1213 00:00:09.904696  175056 retry.go:31] will retry after 440.720775ms: waiting for machine to come up
	I1213 00:00:10.347359  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:10.348009  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | unable to find current IP address of domain stopped-upgrade-884273 in network minikube-net
	I1213 00:00:10.348036  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | I1213 00:00:10.347988  175056 retry.go:31] will retry after 685.20251ms: waiting for machine to come up
	I1213 00:00:11.034642  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:11.035160  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | unable to find current IP address of domain stopped-upgrade-884273 in network minikube-net
	I1213 00:00:11.035188  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | I1213 00:00:11.035114  175056 retry.go:31] will retry after 821.134444ms: waiting for machine to come up
	I1213 00:00:11.858014  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:11.858524  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | unable to find current IP address of domain stopped-upgrade-884273 in network minikube-net
	I1213 00:00:11.858555  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | I1213 00:00:11.858484  175056 retry.go:31] will retry after 786.501465ms: waiting for machine to come up
	I1213 00:00:12.647100  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:12.647771  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | unable to find current IP address of domain stopped-upgrade-884273 in network minikube-net
	I1213 00:00:12.647799  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | I1213 00:00:12.647720  175056 retry.go:31] will retry after 1.207563735s: waiting for machine to come up
	I1213 00:00:13.857004  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:13.857623  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | unable to find current IP address of domain stopped-upgrade-884273 in network minikube-net
	I1213 00:00:13.857656  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | I1213 00:00:13.857580  175056 retry.go:31] will retry after 1.748583835s: waiting for machine to come up
	I1213 00:00:15.607699  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:15.608393  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | unable to find current IP address of domain stopped-upgrade-884273 in network minikube-net
	I1213 00:00:15.608443  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | I1213 00:00:15.608332  175056 retry.go:31] will retry after 1.659471045s: waiting for machine to come up
	I1213 00:00:17.269696  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:17.270110  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | unable to find current IP address of domain stopped-upgrade-884273 in network minikube-net
	I1213 00:00:17.270136  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | I1213 00:00:17.270063  175056 retry.go:31] will retry after 2.230766076s: waiting for machine to come up
	I1213 00:00:19.502502  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:19.502972  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | unable to find current IP address of domain stopped-upgrade-884273 in network minikube-net
	I1213 00:00:19.503003  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | I1213 00:00:19.502948  175056 retry.go:31] will retry after 3.017880266s: waiting for machine to come up
	I1213 00:00:22.522365  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:22.522935  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | unable to find current IP address of domain stopped-upgrade-884273 in network minikube-net
	I1213 00:00:22.522962  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | I1213 00:00:22.522876  175056 retry.go:31] will retry after 4.315831649s: waiting for machine to come up
	I1213 00:00:26.842302  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:26.842994  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | unable to find current IP address of domain stopped-upgrade-884273 in network minikube-net
	I1213 00:00:26.843026  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | I1213 00:00:26.842940  175056 retry.go:31] will retry after 3.854223167s: waiting for machine to come up
	I1213 00:00:30.698800  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:30.699264  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | unable to find current IP address of domain stopped-upgrade-884273 in network minikube-net
	I1213 00:00:30.699290  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | I1213 00:00:30.699207  175056 retry.go:31] will retry after 6.333797324s: waiting for machine to come up
	I1213 00:00:37.037689  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:37.038255  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | unable to find current IP address of domain stopped-upgrade-884273 in network minikube-net
	I1213 00:00:37.038317  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | I1213 00:00:37.038214  175056 retry.go:31] will retry after 8.378019628s: waiting for machine to come up
	I1213 00:00:45.417831  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:45.418473  174977 main.go:141] libmachine: (stopped-upgrade-884273) Found IP for machine: 192.168.83.35
	I1213 00:00:45.418499  174977 main.go:141] libmachine: (stopped-upgrade-884273) Reserving static IP address...
	I1213 00:00:45.418526  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has current primary IP address 192.168.83.35 and MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:45.419040  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | found host DHCP lease matching {name: "stopped-upgrade-884273", mac: "52:54:00:ac:d8:54", ip: "192.168.83.35"} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-12-13 01:00:35 +0000 UTC Type:0 Mac:52:54:00:ac:d8:54 Iaid: IPaddr:192.168.83.35 Prefix:24 Hostname:stopped-upgrade-884273 Clientid:01:52:54:00:ac:d8:54}
	I1213 00:00:45.419066  174977 main.go:141] libmachine: (stopped-upgrade-884273) Reserved static IP address: 192.168.83.35
	I1213 00:00:45.419089  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-884273", mac: "52:54:00:ac:d8:54", ip: "192.168.83.35"}
	I1213 00:00:45.419109  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | Getting to WaitForSSH function...
	I1213 00:00:45.419122  174977 main.go:141] libmachine: (stopped-upgrade-884273) Waiting for SSH to be available...
	I1213 00:00:45.421792  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:45.422200  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:d8:54", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-12-13 01:00:35 +0000 UTC Type:0 Mac:52:54:00:ac:d8:54 Iaid: IPaddr:192.168.83.35 Prefix:24 Hostname:stopped-upgrade-884273 Clientid:01:52:54:00:ac:d8:54}
	I1213 00:00:45.422226  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined IP address 192.168.83.35 and MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:45.422316  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | Using SSH client type: external
	I1213 00:00:45.422364  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/stopped-upgrade-884273/id_rsa (-rw-------)
	I1213 00:00:45.422419  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.35 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/stopped-upgrade-884273/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:00:45.422437  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | About to run SSH command:
	I1213 00:00:45.422452  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | exit 0
	I1213 00:00:45.564804  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | SSH cmd err, output: <nil>: 
	I1213 00:00:45.565277  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetConfigRaw
	I1213 00:00:45.565971  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetIP
	I1213 00:00:45.569024  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:45.569498  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:d8:54", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-12-13 01:00:35 +0000 UTC Type:0 Mac:52:54:00:ac:d8:54 Iaid: IPaddr:192.168.83.35 Prefix:24 Hostname:stopped-upgrade-884273 Clientid:01:52:54:00:ac:d8:54}
	I1213 00:00:45.569543  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined IP address 192.168.83.35 and MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:45.569755  174977 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/stopped-upgrade-884273/config.json ...
	I1213 00:00:45.569996  174977 machine.go:88] provisioning docker machine ...
	I1213 00:00:45.570030  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .DriverName
	I1213 00:00:45.570289  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetMachineName
	I1213 00:00:45.570463  174977 buildroot.go:166] provisioning hostname "stopped-upgrade-884273"
	I1213 00:00:45.570479  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetMachineName
	I1213 00:00:45.570653  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHHostname
	I1213 00:00:45.573052  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:45.573540  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:d8:54", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-12-13 01:00:35 +0000 UTC Type:0 Mac:52:54:00:ac:d8:54 Iaid: IPaddr:192.168.83.35 Prefix:24 Hostname:stopped-upgrade-884273 Clientid:01:52:54:00:ac:d8:54}
	I1213 00:00:45.573586  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined IP address 192.168.83.35 and MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:45.573704  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHPort
	I1213 00:00:45.573896  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHKeyPath
	I1213 00:00:45.574089  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHKeyPath
	I1213 00:00:45.574231  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHUsername
	I1213 00:00:45.574395  174977 main.go:141] libmachine: Using SSH client type: native
	I1213 00:00:45.574910  174977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.83.35 22 <nil> <nil>}
	I1213 00:00:45.574932  174977 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-884273 && echo "stopped-upgrade-884273" | sudo tee /etc/hostname
	I1213 00:00:45.716220  174977 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-884273
	
	I1213 00:00:45.716258  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHHostname
	I1213 00:00:45.719372  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:45.719796  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:d8:54", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-12-13 01:00:35 +0000 UTC Type:0 Mac:52:54:00:ac:d8:54 Iaid: IPaddr:192.168.83.35 Prefix:24 Hostname:stopped-upgrade-884273 Clientid:01:52:54:00:ac:d8:54}
	I1213 00:00:45.719827  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined IP address 192.168.83.35 and MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:45.720031  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHPort
	I1213 00:00:45.720249  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHKeyPath
	I1213 00:00:45.720381  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHKeyPath
	I1213 00:00:45.720547  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHUsername
	I1213 00:00:45.720710  174977 main.go:141] libmachine: Using SSH client type: native
	I1213 00:00:45.721043  174977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.83.35 22 <nil> <nil>}
	I1213 00:00:45.721068  174977 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-884273' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-884273/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-884273' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:00:45.851021  174977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:00:45.851051  174977 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:00:45.851067  174977 buildroot.go:174] setting up certificates
	I1213 00:00:45.851121  174977 provision.go:83] configureAuth start
	I1213 00:00:45.851156  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetMachineName
	I1213 00:00:45.851484  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetIP
	I1213 00:00:45.855041  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:45.855434  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:d8:54", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-12-13 01:00:35 +0000 UTC Type:0 Mac:52:54:00:ac:d8:54 Iaid: IPaddr:192.168.83.35 Prefix:24 Hostname:stopped-upgrade-884273 Clientid:01:52:54:00:ac:d8:54}
	I1213 00:00:45.855478  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined IP address 192.168.83.35 and MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:45.855654  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHHostname
	I1213 00:00:45.857876  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:45.858220  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:d8:54", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-12-13 01:00:35 +0000 UTC Type:0 Mac:52:54:00:ac:d8:54 Iaid: IPaddr:192.168.83.35 Prefix:24 Hostname:stopped-upgrade-884273 Clientid:01:52:54:00:ac:d8:54}
	I1213 00:00:45.858252  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined IP address 192.168.83.35 and MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:45.858499  174977 provision.go:138] copyHostCerts
	I1213 00:00:45.858569  174977 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:00:45.858595  174977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:00:45.858675  174977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:00:45.858805  174977 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:00:45.858822  174977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:00:45.858862  174977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:00:45.858933  174977 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:00:45.858943  174977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:00:45.858978  174977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:00:45.859042  174977 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-884273 san=[192.168.83.35 192.168.83.35 localhost 127.0.0.1 minikube stopped-upgrade-884273]
	I1213 00:00:45.977475  174977 provision.go:172] copyRemoteCerts
	I1213 00:00:45.977550  174977 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:00:45.977576  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHHostname
	I1213 00:00:45.980678  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:45.981177  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:d8:54", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-12-13 01:00:35 +0000 UTC Type:0 Mac:52:54:00:ac:d8:54 Iaid: IPaddr:192.168.83.35 Prefix:24 Hostname:stopped-upgrade-884273 Clientid:01:52:54:00:ac:d8:54}
	I1213 00:00:45.981256  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined IP address 192.168.83.35 and MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:45.981484  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHPort
	I1213 00:00:45.981688  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHKeyPath
	I1213 00:00:45.981894  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHUsername
	I1213 00:00:45.982044  174977 sshutil.go:53] new ssh client: &{IP:192.168.83.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/stopped-upgrade-884273/id_rsa Username:docker}
	I1213 00:00:46.071907  174977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:00:46.086983  174977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1213 00:00:46.102137  174977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 00:00:46.117924  174977 provision.go:86] duration metric: configureAuth took 266.76906ms
	I1213 00:00:46.117966  174977 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:00:46.118185  174977 config.go:182] Loaded profile config "stopped-upgrade-884273": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1213 00:00:46.118288  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHHostname
	I1213 00:00:46.121365  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:46.121799  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:d8:54", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-12-13 01:00:35 +0000 UTC Type:0 Mac:52:54:00:ac:d8:54 Iaid: IPaddr:192.168.83.35 Prefix:24 Hostname:stopped-upgrade-884273 Clientid:01:52:54:00:ac:d8:54}
	I1213 00:00:46.121833  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined IP address 192.168.83.35 and MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:46.122086  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHPort
	I1213 00:00:46.122327  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHKeyPath
	I1213 00:00:46.122546  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHKeyPath
	I1213 00:00:46.122711  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHUsername
	I1213 00:00:46.122906  174977 main.go:141] libmachine: Using SSH client type: native
	I1213 00:00:46.123373  174977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.83.35 22 <nil> <nil>}
	I1213 00:00:46.123392  174977 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:00:49.044584  174977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:00:49.044622  174977 machine.go:91] provisioned docker machine in 3.474599107s
	I1213 00:00:49.044634  174977 start.go:300] post-start starting for "stopped-upgrade-884273" (driver="kvm2")
	I1213 00:00:49.044645  174977 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:00:49.044659  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .DriverName
	I1213 00:00:49.044980  174977 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:00:49.045009  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHHostname
	I1213 00:00:49.048186  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:49.048666  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:d8:54", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-12-13 01:00:35 +0000 UTC Type:0 Mac:52:54:00:ac:d8:54 Iaid: IPaddr:192.168.83.35 Prefix:24 Hostname:stopped-upgrade-884273 Clientid:01:52:54:00:ac:d8:54}
	I1213 00:00:49.048695  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined IP address 192.168.83.35 and MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:49.048914  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHPort
	I1213 00:00:49.049159  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHKeyPath
	I1213 00:00:49.049343  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHUsername
	I1213 00:00:49.049497  174977 sshutil.go:53] new ssh client: &{IP:192.168.83.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/stopped-upgrade-884273/id_rsa Username:docker}
	I1213 00:00:49.143247  174977 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:00:49.147517  174977 info.go:137] Remote host: Buildroot 2019.02.7
	I1213 00:00:49.147538  174977 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:00:49.147614  174977 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:00:49.147710  174977 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:00:49.147841  174977 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:00:49.153121  174977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:00:49.166928  174977 start.go:303] post-start completed in 122.279255ms
	I1213 00:00:49.166954  174977 fix.go:56] fixHost completed within 41.609357392s
	I1213 00:00:49.166974  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHHostname
	I1213 00:00:49.170163  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:49.170585  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:d8:54", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-12-13 01:00:35 +0000 UTC Type:0 Mac:52:54:00:ac:d8:54 Iaid: IPaddr:192.168.83.35 Prefix:24 Hostname:stopped-upgrade-884273 Clientid:01:52:54:00:ac:d8:54}
	I1213 00:00:49.170620  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined IP address 192.168.83.35 and MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:49.170783  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHPort
	I1213 00:00:49.170982  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHKeyPath
	I1213 00:00:49.171124  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHKeyPath
	I1213 00:00:49.171249  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHUsername
	I1213 00:00:49.171414  174977 main.go:141] libmachine: Using SSH client type: native
	I1213 00:00:49.171837  174977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.83.35 22 <nil> <nil>}
	I1213 00:00:49.171855  174977 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 00:00:49.297546  174977 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702425649.234037057
	
	I1213 00:00:49.297575  174977 fix.go:206] guest clock: 1702425649.234037057
	I1213 00:00:49.297586  174977 fix.go:219] Guest: 2023-12-13 00:00:49.234037057 +0000 UTC Remote: 2023-12-13 00:00:49.166957824 +0000 UTC m=+46.133558007 (delta=67.079233ms)
	I1213 00:00:49.297607  174977 fix.go:190] guest clock delta is within tolerance: 67.079233ms
	I1213 00:00:49.297614  174977 start.go:83] releasing machines lock for "stopped-upgrade-884273", held for 41.740051824s
	I1213 00:00:49.297638  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .DriverName
	I1213 00:00:49.297937  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetIP
	I1213 00:00:49.300654  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:49.301002  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:d8:54", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-12-13 01:00:35 +0000 UTC Type:0 Mac:52:54:00:ac:d8:54 Iaid: IPaddr:192.168.83.35 Prefix:24 Hostname:stopped-upgrade-884273 Clientid:01:52:54:00:ac:d8:54}
	I1213 00:00:49.301031  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined IP address 192.168.83.35 and MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:49.301168  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .DriverName
	I1213 00:00:49.301856  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .DriverName
	I1213 00:00:49.302067  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .DriverName
	I1213 00:00:49.302175  174977 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:00:49.302226  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHHostname
	I1213 00:00:49.302340  174977 ssh_runner.go:195] Run: cat /version.json
	I1213 00:00:49.302367  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHHostname
	I1213 00:00:49.305025  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:49.305450  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:49.305499  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:d8:54", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-12-13 01:00:35 +0000 UTC Type:0 Mac:52:54:00:ac:d8:54 Iaid: IPaddr:192.168.83.35 Prefix:24 Hostname:stopped-upgrade-884273 Clientid:01:52:54:00:ac:d8:54}
	I1213 00:00:49.305521  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined IP address 192.168.83.35 and MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:49.305663  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHPort
	I1213 00:00:49.305891  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHKeyPath
	I1213 00:00:49.306092  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHUsername
	I1213 00:00:49.306130  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:d8:54", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-12-13 01:00:35 +0000 UTC Type:0 Mac:52:54:00:ac:d8:54 Iaid: IPaddr:192.168.83.35 Prefix:24 Hostname:stopped-upgrade-884273 Clientid:01:52:54:00:ac:d8:54}
	I1213 00:00:49.306156  174977 main.go:141] libmachine: (stopped-upgrade-884273) DBG | domain stopped-upgrade-884273 has defined IP address 192.168.83.35 and MAC address 52:54:00:ac:d8:54 in network minikube-net
	I1213 00:00:49.306175  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHPort
	I1213 00:00:49.306249  174977 sshutil.go:53] new ssh client: &{IP:192.168.83.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/stopped-upgrade-884273/id_rsa Username:docker}
	I1213 00:00:49.306318  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHKeyPath
	I1213 00:00:49.306493  174977 main.go:141] libmachine: (stopped-upgrade-884273) Calling .GetSSHUsername
	I1213 00:00:49.306662  174977 sshutil.go:53] new ssh client: &{IP:192.168.83.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/stopped-upgrade-884273/id_rsa Username:docker}
	W1213 00:00:49.393760  174977 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1213 00:00:49.393844  174977 ssh_runner.go:195] Run: systemctl --version
	I1213 00:00:49.419108  174977 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:00:49.491532  174977 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:00:49.498662  174977 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:00:49.498747  174977 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:00:49.504292  174977 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 00:00:49.504322  174977 start.go:475] detecting cgroup driver to use...
	I1213 00:00:49.504391  174977 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:00:49.514841  174977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:00:49.523542  174977 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:00:49.523589  174977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:00:49.531190  174977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:00:49.539721  174977 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1213 00:00:49.547847  174977 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1213 00:00:49.547904  174977 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:00:49.650687  174977 docker.go:219] disabling docker service ...
	I1213 00:00:49.650794  174977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:00:49.663255  174977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:00:49.671569  174977 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:00:49.772466  174977 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:00:49.876794  174977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:00:49.885543  174977 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:00:49.898097  174977 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1213 00:00:49.898167  174977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:00:49.907517  174977 out.go:177] 
	W1213 00:00:49.908562  174977 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1213 00:00:49.908582  174977 out.go:239] * 
	W1213 00:00:49.909555  174977 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 00:00:49.911260  174977 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-884273 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (263.59s)
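Note on the failure above: the v1.6.2-era guest image (Buildroot 2019.02.7, per the os-release probe earlier in the log) does not ship /etc/crio/crio.conf.d/02-crio.conf, so the unconditional sed that rewrites pause_image exits with status 1 and the start aborts with RUNTIME_ENABLE. The shell sketch below shows one defensive way such a step could be written; it is only an illustration that reuses the drop-in path and pause image value from the log, not the command minikube actually runs.

	CONF=/etc/crio/crio.conf.d/02-crio.conf        # drop-in path from the failing sed above
	PAUSE_IMAGE="registry.k8s.io/pause:3.1"        # pause image the upgrade was trying to set
	sudo mkdir -p "$(dirname "$CONF")"
	# Create a minimal drop-in when the old guest image ships without one.
	[ -f "$CONF" ] || printf '[crio.image]\n' | sudo tee "$CONF" >/dev/null
	if grep -q '^pause_image' "$CONF"; then
	  sudo sed -i "s|^pause_image = .*$|pause_image = \"$PAUSE_IMAGE\"|" "$CONF"
	else
	  printf 'pause_image = "%s"\n' "$PAUSE_IMAGE" | sudo tee -a "$CONF" >/dev/null
	fi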

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (72.76s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-042245 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1212 23:57:45.320406  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-042245 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m7.868675108s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-042245] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17777
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-042245 in cluster pause-042245
	* Updating the running kvm2 "pause-042245" VM ...
	* Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-042245" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 23:57:44.584215  173337 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:57:44.584555  173337 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:57:44.584567  173337 out.go:309] Setting ErrFile to fd 2...
	I1212 23:57:44.584575  173337 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:57:44.584877  173337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1212 23:57:44.585603  173337 out.go:303] Setting JSON to false
	I1212 23:57:44.586913  173337 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9613,"bootTime":1702415852,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 23:57:44.587010  173337 start.go:138] virtualization: kvm guest
	I1212 23:57:44.589473  173337 out.go:177] * [pause-042245] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 23:57:44.591498  173337 notify.go:220] Checking for updates...
	I1212 23:57:44.591507  173337 out.go:177]   - MINIKUBE_LOCATION=17777
	I1212 23:57:44.593003  173337 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:57:44.594600  173337 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:57:44.596050  173337 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 23:57:44.597435  173337 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 23:57:44.598674  173337 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:57:44.601148  173337 config.go:182] Loaded profile config "pause-042245": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:57:44.601796  173337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:57:44.601871  173337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:57:44.618666  173337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37959
	I1212 23:57:44.619075  173337 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:57:44.619631  173337 main.go:141] libmachine: Using API Version  1
	I1212 23:57:44.619662  173337 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:57:44.620033  173337 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:57:44.620211  173337 main.go:141] libmachine: (pause-042245) Calling .DriverName
	I1212 23:57:44.620497  173337 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:57:44.620929  173337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:57:44.620975  173337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:57:44.637767  173337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38311
	I1212 23:57:44.638369  173337 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:57:44.638947  173337 main.go:141] libmachine: Using API Version  1
	I1212 23:57:44.638961  173337 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:57:44.639366  173337 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:57:44.639595  173337 main.go:141] libmachine: (pause-042245) Calling .DriverName
	I1212 23:57:44.678955  173337 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 23:57:44.680458  173337 start.go:298] selected driver: kvm2
	I1212 23:57:44.680478  173337 start.go:902] validating driver "kvm2" against &{Name:pause-042245 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-042245 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:57:44.680651  173337 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:57:44.681069  173337 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:57:44.681145  173337 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17777-136241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 23:57:44.697354  173337 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 23:57:44.698414  173337 cni.go:84] Creating CNI manager for ""
	I1212 23:57:44.698436  173337 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:57:44.698453  173337 start_flags.go:323] config:
	{Name:pause-042245 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-042245 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:57:44.698717  173337 iso.go:125] acquiring lock: {Name:mkaf0c5717f5bf6253bd7ebf86eb20a82d195bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:57:44.701593  173337 out.go:177] * Starting control plane node pause-042245 in cluster pause-042245
	I1212 23:57:44.703060  173337 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:57:44.703097  173337 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 23:57:44.703116  173337 cache.go:56] Caching tarball of preloaded images
	I1212 23:57:44.703197  173337 preload.go:174] Found /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 23:57:44.703207  173337 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 23:57:44.703346  173337 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/pause-042245/config.json ...
	I1212 23:57:44.703619  173337 start.go:365] acquiring machines lock for pause-042245: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:57:57.898365  173337 start.go:369] acquired machines lock for "pause-042245" in 13.194693191s
	I1212 23:57:57.898421  173337 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:57:57.898429  173337 fix.go:54] fixHost starting: 
	I1212 23:57:57.898842  173337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:57:57.898905  173337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:57:57.917630  173337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37165
	I1212 23:57:57.917994  173337 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:57:57.918485  173337 main.go:141] libmachine: Using API Version  1
	I1212 23:57:57.918515  173337 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:57:57.918864  173337 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:57:57.919134  173337 main.go:141] libmachine: (pause-042245) Calling .DriverName
	I1212 23:57:57.919281  173337 main.go:141] libmachine: (pause-042245) Calling .GetState
	I1212 23:57:57.921188  173337 fix.go:102] recreateIfNeeded on pause-042245: state=Running err=<nil>
	W1212 23:57:57.921208  173337 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:57:57.924003  173337 out.go:177] * Updating the running kvm2 "pause-042245" VM ...
	I1212 23:57:57.925525  173337 machine.go:88] provisioning docker machine ...
	I1212 23:57:57.925548  173337 main.go:141] libmachine: (pause-042245) Calling .DriverName
	I1212 23:57:57.925768  173337 main.go:141] libmachine: (pause-042245) Calling .GetMachineName
	I1212 23:57:57.925918  173337 buildroot.go:166] provisioning hostname "pause-042245"
	I1212 23:57:57.925934  173337 main.go:141] libmachine: (pause-042245) Calling .GetMachineName
	I1212 23:57:57.926064  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHHostname
	I1212 23:57:57.928788  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:57:57.929288  173337 main.go:141] libmachine: (pause-042245) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:0b:5d", ip: ""} in network mk-pause-042245: {Iface:virbr2 ExpiryTime:2023-12-13 00:56:16 +0000 UTC Type:0 Mac:52:54:00:ec:0b:5d Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:pause-042245 Clientid:01:52:54:00:ec:0b:5d}
	I1212 23:57:57.929317  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined IP address 192.168.50.180 and MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:57:57.929573  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHPort
	I1212 23:57:57.929739  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHKeyPath
	I1212 23:57:57.929936  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHKeyPath
	I1212 23:57:57.930073  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHUsername
	I1212 23:57:57.930180  173337 main.go:141] libmachine: Using SSH client type: native
	I1212 23:57:57.930628  173337 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1212 23:57:57.930652  173337 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-042245 && echo "pause-042245" | sudo tee /etc/hostname
	I1212 23:57:58.081723  173337 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-042245
	
	I1212 23:57:58.081745  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHHostname
	I1212 23:57:58.085131  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:57:58.085549  173337 main.go:141] libmachine: (pause-042245) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:0b:5d", ip: ""} in network mk-pause-042245: {Iface:virbr2 ExpiryTime:2023-12-13 00:56:16 +0000 UTC Type:0 Mac:52:54:00:ec:0b:5d Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:pause-042245 Clientid:01:52:54:00:ec:0b:5d}
	I1212 23:57:58.085604  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined IP address 192.168.50.180 and MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:57:58.085898  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHPort
	I1212 23:57:58.086172  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHKeyPath
	I1212 23:57:58.086373  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHKeyPath
	I1212 23:57:58.086567  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHUsername
	I1212 23:57:58.086783  173337 main.go:141] libmachine: Using SSH client type: native
	I1212 23:57:58.087252  173337 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1212 23:57:58.087284  173337 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-042245' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-042245/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-042245' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:57:58.222729  173337 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:57:58.222759  173337 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1212 23:57:58.222786  173337 buildroot.go:174] setting up certificates
	I1212 23:57:58.222798  173337 provision.go:83] configureAuth start
	I1212 23:57:58.222811  173337 main.go:141] libmachine: (pause-042245) Calling .GetMachineName
	I1212 23:57:58.223091  173337 main.go:141] libmachine: (pause-042245) Calling .GetIP
	I1212 23:57:58.226508  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:57:58.227185  173337 main.go:141] libmachine: (pause-042245) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:0b:5d", ip: ""} in network mk-pause-042245: {Iface:virbr2 ExpiryTime:2023-12-13 00:56:16 +0000 UTC Type:0 Mac:52:54:00:ec:0b:5d Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:pause-042245 Clientid:01:52:54:00:ec:0b:5d}
	I1212 23:57:58.227214  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined IP address 192.168.50.180 and MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:57:58.227476  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHHostname
	I1212 23:57:58.231072  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:57:58.231723  173337 main.go:141] libmachine: (pause-042245) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:0b:5d", ip: ""} in network mk-pause-042245: {Iface:virbr2 ExpiryTime:2023-12-13 00:56:16 +0000 UTC Type:0 Mac:52:54:00:ec:0b:5d Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:pause-042245 Clientid:01:52:54:00:ec:0b:5d}
	I1212 23:57:58.231760  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined IP address 192.168.50.180 and MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:57:58.232049  173337 provision.go:138] copyHostCerts
	I1212 23:57:58.232111  173337 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1212 23:57:58.232123  173337 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1212 23:57:58.232195  173337 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1212 23:57:58.232347  173337 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1212 23:57:58.232366  173337 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1212 23:57:58.232403  173337 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1212 23:57:58.232524  173337 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1212 23:57:58.232535  173337 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1212 23:57:58.232569  173337 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1212 23:57:58.232651  173337 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.pause-042245 san=[192.168.50.180 192.168.50.180 localhost 127.0.0.1 minikube pause-042245]
	I1212 23:57:58.517512  173337 provision.go:172] copyRemoteCerts
	I1212 23:57:58.517589  173337 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:57:58.517617  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHHostname
	I1212 23:57:58.521090  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:57:58.521446  173337 main.go:141] libmachine: (pause-042245) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:0b:5d", ip: ""} in network mk-pause-042245: {Iface:virbr2 ExpiryTime:2023-12-13 00:56:16 +0000 UTC Type:0 Mac:52:54:00:ec:0b:5d Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:pause-042245 Clientid:01:52:54:00:ec:0b:5d}
	I1212 23:57:58.521515  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined IP address 192.168.50.180 and MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:57:58.521717  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHPort
	I1212 23:57:58.521895  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHKeyPath
	I1212 23:57:58.522039  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHUsername
	I1212 23:57:58.522316  173337 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/pause-042245/id_rsa Username:docker}
	I1212 23:57:58.634264  173337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:57:58.669640  173337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1212 23:57:58.703409  173337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:57:58.736158  173337 provision.go:86] duration metric: configureAuth took 513.343848ms
	I1212 23:57:58.736186  173337 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:57:58.736458  173337 config.go:182] Loaded profile config "pause-042245": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:57:58.736546  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHHostname
	I1212 23:57:58.739663  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:57:58.739982  173337 main.go:141] libmachine: (pause-042245) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:0b:5d", ip: ""} in network mk-pause-042245: {Iface:virbr2 ExpiryTime:2023-12-13 00:56:16 +0000 UTC Type:0 Mac:52:54:00:ec:0b:5d Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:pause-042245 Clientid:01:52:54:00:ec:0b:5d}
	I1212 23:57:58.740019  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined IP address 192.168.50.180 and MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:57:58.740238  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHPort
	I1212 23:57:58.740480  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHKeyPath
	I1212 23:57:58.740731  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHKeyPath
	I1212 23:57:58.740895  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHUsername
	I1212 23:57:58.741048  173337 main.go:141] libmachine: Using SSH client type: native
	I1212 23:57:58.741350  173337 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1212 23:57:58.741364  173337 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:58:06.177245  173337 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:58:06.177273  173337 machine.go:91] provisioned docker machine in 8.251728779s
	I1212 23:58:06.177286  173337 start.go:300] post-start starting for "pause-042245" (driver="kvm2")
	I1212 23:58:06.177299  173337 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:58:06.177333  173337 main.go:141] libmachine: (pause-042245) Calling .DriverName
	I1212 23:58:06.177754  173337 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:58:06.177792  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHHostname
	I1212 23:58:06.180897  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:58:06.181304  173337 main.go:141] libmachine: (pause-042245) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:0b:5d", ip: ""} in network mk-pause-042245: {Iface:virbr2 ExpiryTime:2023-12-13 00:56:16 +0000 UTC Type:0 Mac:52:54:00:ec:0b:5d Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:pause-042245 Clientid:01:52:54:00:ec:0b:5d}
	I1212 23:58:06.181336  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined IP address 192.168.50.180 and MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:58:06.181606  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHPort
	I1212 23:58:06.181791  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHKeyPath
	I1212 23:58:06.181972  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHUsername
	I1212 23:58:06.182133  173337 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/pause-042245/id_rsa Username:docker}
	I1212 23:58:06.471614  173337 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:58:06.504833  173337 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:58:06.504868  173337 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1212 23:58:06.504952  173337 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1212 23:58:06.505069  173337 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1212 23:58:06.505209  173337 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:58:06.630549  173337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1212 23:58:06.729267  173337 start.go:303] post-start completed in 551.963053ms
	I1212 23:58:06.729300  173337 fix.go:56] fixHost completed within 8.830864645s
	I1212 23:58:06.729324  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHHostname
	I1212 23:58:06.732488  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:58:06.732920  173337 main.go:141] libmachine: (pause-042245) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:0b:5d", ip: ""} in network mk-pause-042245: {Iface:virbr2 ExpiryTime:2023-12-13 00:56:16 +0000 UTC Type:0 Mac:52:54:00:ec:0b:5d Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:pause-042245 Clientid:01:52:54:00:ec:0b:5d}
	I1212 23:58:06.732954  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined IP address 192.168.50.180 and MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:58:06.733146  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHPort
	I1212 23:58:06.733356  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHKeyPath
	I1212 23:58:06.733557  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHKeyPath
	I1212 23:58:06.733714  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHUsername
	I1212 23:58:06.733882  173337 main.go:141] libmachine: Using SSH client type: native
	I1212 23:58:06.734184  173337 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.180 22 <nil> <nil>}
	I1212 23:58:06.734195  173337 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:58:06.916869  173337 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702425486.912007051
	
	I1212 23:58:06.916894  173337 fix.go:206] guest clock: 1702425486.912007051
	I1212 23:58:06.916901  173337 fix.go:219] Guest: 2023-12-12 23:58:06.912007051 +0000 UTC Remote: 2023-12-12 23:58:06.729304126 +0000 UTC m=+22.208014637 (delta=182.702925ms)
	I1212 23:58:06.916919  173337 fix.go:190] guest clock delta is within tolerance: 182.702925ms
	I1212 23:58:06.916923  173337 start.go:83] releasing machines lock for "pause-042245", held for 9.018526395s
	I1212 23:58:06.916944  173337 main.go:141] libmachine: (pause-042245) Calling .DriverName
	I1212 23:58:06.917218  173337 main.go:141] libmachine: (pause-042245) Calling .GetIP
	I1212 23:58:06.919917  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:58:06.920370  173337 main.go:141] libmachine: (pause-042245) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:0b:5d", ip: ""} in network mk-pause-042245: {Iface:virbr2 ExpiryTime:2023-12-13 00:56:16 +0000 UTC Type:0 Mac:52:54:00:ec:0b:5d Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:pause-042245 Clientid:01:52:54:00:ec:0b:5d}
	I1212 23:58:06.920399  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined IP address 192.168.50.180 and MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:58:06.920570  173337 main.go:141] libmachine: (pause-042245) Calling .DriverName
	I1212 23:58:06.921069  173337 main.go:141] libmachine: (pause-042245) Calling .DriverName
	I1212 23:58:06.921346  173337 main.go:141] libmachine: (pause-042245) Calling .DriverName
	I1212 23:58:06.921437  173337 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:58:06.921478  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHHostname
	I1212 23:58:06.921588  173337 ssh_runner.go:195] Run: cat /version.json
	I1212 23:58:06.921614  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHHostname
	I1212 23:58:06.924334  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:58:06.924466  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:58:06.924714  173337 main.go:141] libmachine: (pause-042245) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:0b:5d", ip: ""} in network mk-pause-042245: {Iface:virbr2 ExpiryTime:2023-12-13 00:56:16 +0000 UTC Type:0 Mac:52:54:00:ec:0b:5d Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:pause-042245 Clientid:01:52:54:00:ec:0b:5d}
	I1212 23:58:06.924753  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined IP address 192.168.50.180 and MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:58:06.924782  173337 main.go:141] libmachine: (pause-042245) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:0b:5d", ip: ""} in network mk-pause-042245: {Iface:virbr2 ExpiryTime:2023-12-13 00:56:16 +0000 UTC Type:0 Mac:52:54:00:ec:0b:5d Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:pause-042245 Clientid:01:52:54:00:ec:0b:5d}
	I1212 23:58:06.924799  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined IP address 192.168.50.180 and MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:58:06.924958  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHPort
	I1212 23:58:06.925115  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHPort
	I1212 23:58:06.925123  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHKeyPath
	I1212 23:58:06.925300  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHKeyPath
	I1212 23:58:06.925302  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHUsername
	I1212 23:58:06.925466  173337 main.go:141] libmachine: (pause-042245) Calling .GetSSHUsername
	I1212 23:58:06.925480  173337 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/pause-042245/id_rsa Username:docker}
	I1212 23:58:06.925615  173337 sshutil.go:53] new ssh client: &{IP:192.168.50.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/pause-042245/id_rsa Username:docker}
	I1212 23:58:07.029926  173337 ssh_runner.go:195] Run: systemctl --version
	I1212 23:58:07.068852  173337 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:58:07.250024  173337 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:58:07.264028  173337 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:58:07.264111  173337 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:58:07.285915  173337 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 23:58:07.286018  173337 start.go:475] detecting cgroup driver to use...
	I1212 23:58:07.286113  173337 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:58:07.313458  173337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:58:07.347423  173337 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:58:07.347495  173337 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:58:07.371351  173337 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:58:07.428201  173337 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:58:07.710786  173337 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:58:07.967345  173337 docker.go:219] disabling docker service ...
	I1212 23:58:07.967439  173337 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:58:08.003955  173337 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:58:08.035441  173337 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:58:08.370475  173337 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:58:08.693738  173337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:58:08.726720  173337 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:58:08.794264  173337 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:58:08.794336  173337 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:58:08.824236  173337 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:58:08.824318  173337 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:58:08.838922  173337 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:58:08.852363  173337 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
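(The four sed invocations above rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf: pin the pause image to registry.k8s.io/pause:3.9, set cgroup_manager to "cgroupfs", drop any existing conmon_cgroup line, and re-add conmon_cgroup = "pod" after the cgroup_manager line. The sketch below replays those substitutions on an illustrative starting snippet in Go; the initial file contents are assumptions, only the regexes and replacement values mirror the log.)

    // Hypothetical sketch of the 02-crio.conf rewrite performed by the sed commands above.
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Illustrative starting contents; the real file on the VM may differ.
        conf := "pause_image = \"registry.k8s.io/pause:3.6\"\n" +
            "cgroup_manager = \"systemd\"\n" +
            "conmon_cgroup = \"system.slice\"\n"

        // sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        // sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        // sed -i '/conmon_cgroup = .*/d'
        conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
        // sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"'
        conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
            ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")

        fmt.Print(conf)
    }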
	I1212 23:58:08.864642  173337 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:58:08.877542  173337 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:58:08.889271  173337 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:58:08.899876  173337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:58:09.113174  173337 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:58:10.633232  173337 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.520014182s)
	I1212 23:58:10.633265  173337 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:58:10.633317  173337 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:58:10.640062  173337 start.go:543] Will wait 60s for crictl version
	I1212 23:58:10.640164  173337 ssh_runner.go:195] Run: which crictl
	I1212 23:58:10.644976  173337 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:58:10.690633  173337 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:58:10.690723  173337 ssh_runner.go:195] Run: crio --version
	I1212 23:58:10.745521  173337 ssh_runner.go:195] Run: crio --version
	I1212 23:58:10.794654  173337 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 23:58:10.796043  173337 main.go:141] libmachine: (pause-042245) Calling .GetIP
	I1212 23:58:10.799005  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:58:10.799365  173337 main.go:141] libmachine: (pause-042245) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:0b:5d", ip: ""} in network mk-pause-042245: {Iface:virbr2 ExpiryTime:2023-12-13 00:56:16 +0000 UTC Type:0 Mac:52:54:00:ec:0b:5d Iaid: IPaddr:192.168.50.180 Prefix:24 Hostname:pause-042245 Clientid:01:52:54:00:ec:0b:5d}
	I1212 23:58:10.799395  173337 main.go:141] libmachine: (pause-042245) DBG | domain pause-042245 has defined IP address 192.168.50.180 and MAC address 52:54:00:ec:0b:5d in network mk-pause-042245
	I1212 23:58:10.799588  173337 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1212 23:58:10.804007  173337 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:58:10.804111  173337 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:58:10.861475  173337 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 23:58:10.861498  173337 crio.go:415] Images already preloaded, skipping extraction
	I1212 23:58:10.861545  173337 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:58:10.900408  173337 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 23:58:10.900460  173337 cache_images.go:84] Images are preloaded, skipping loading
	I1212 23:58:10.900538  173337 ssh_runner.go:195] Run: crio config
	I1212 23:58:10.964404  173337 cni.go:84] Creating CNI manager for ""
	I1212 23:58:10.964427  173337 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:58:10.964476  173337 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:58:10.964503  173337 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.180 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-042245 NodeName:pause-042245 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:58:10.964746  173337 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-042245"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:58:10.964853  173337 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-042245 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-042245 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 23:58:10.964935  173337 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:58:10.974020  173337 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:58:10.974109  173337 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:58:10.982995  173337 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1212 23:58:11.000377  173337 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:58:11.018624  173337 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1212 23:58:11.036174  173337 ssh_runner.go:195] Run: grep 192.168.50.180	control-plane.minikube.internal$ /etc/hosts
	I1212 23:58:11.040718  173337 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/pause-042245 for IP: 192.168.50.180
	I1212 23:58:11.040759  173337 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:58:11.040956  173337 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1212 23:58:11.041004  173337 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1212 23:58:11.041078  173337 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/pause-042245/client.key
	I1212 23:58:11.041146  173337 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/pause-042245/apiserver.key.be826a61
	I1212 23:58:11.041187  173337 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/pause-042245/proxy-client.key
	I1212 23:58:11.041298  173337 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1212 23:58:11.041327  173337 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1212 23:58:11.041339  173337 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:58:11.041371  173337 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:58:11.041396  173337 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:58:11.041426  173337 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1212 23:58:11.041471  173337 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1212 23:58:11.042142  173337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/pause-042245/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:58:11.072293  173337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/pause-042245/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 23:58:11.102517  173337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/pause-042245/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:58:11.132319  173337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/pause-042245/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 23:58:11.157884  173337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:58:11.182182  173337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 23:58:11.205167  173337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:58:11.231194  173337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 23:58:11.256571  173337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1212 23:58:11.286327  173337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1212 23:58:11.309774  173337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:58:11.342956  173337 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:58:11.531650  173337 ssh_runner.go:195] Run: openssl version
	I1212 23:58:11.538770  173337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1212 23:58:11.571280  173337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1212 23:58:11.609705  173337 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1212 23:58:11.609820  173337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1212 23:58:11.643274  173337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1212 23:58:11.799853  173337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1212 23:58:11.835822  173337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1212 23:58:11.859306  173337 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1212 23:58:11.859402  173337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1212 23:58:11.872167  173337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:58:11.897898  173337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:58:11.930031  173337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:58:11.940693  173337 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:58:11.940754  173337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:58:11.951358  173337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:58:11.971758  173337 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:58:11.985383  173337 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:58:12.001231  173337 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:58:12.015415  173337 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:58:12.028265  173337 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:58:12.039205  173337 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:58:12.050324  173337 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
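(Each of the `openssl x509 -noout -in <cert> -checkend 86400` runs above asks whether the certificate expires within the next 24 hours, returning a non-zero exit status if it does. A rough, self-contained Go equivalent is sketched below; the certificate path is just one of the files checked in the log, and the output strings only approximate openssl's wording.)

    // Rough equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
    // exit non-zero if the certificate's NotAfter falls within the next 24h.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
            fmt.Println("Certificate will expire")
            os.Exit(1)
        }
        fmt.Println("Certificate will not expire")
    }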
	I1212 23:58:12.057290  173337 kubeadm.go:404] StartCluster: {Name:pause-042245 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
28.4 ClusterName:pause-042245 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gp
u-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:58:12.057434  173337 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:58:12.057526  173337 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:58:12.116749  173337 cri.go:89] found id: "465e43015dc4511b642dfd09115f386f9aef0a975b7d501e2cc929e53309e2c4"
	I1212 23:58:12.116784  173337 cri.go:89] found id: "c6a36b420e7f35d9a32c66c6bacce0d1e7c275986d2859923f713b9abc272f94"
	I1212 23:58:12.116791  173337 cri.go:89] found id: "93b1d43ce71f1fe97981104945bb7cfae5f8473934dafeb27fad8af562d64b56"
	I1212 23:58:12.116796  173337 cri.go:89] found id: "d00d0e32c8c663f5f5de60451635e5b36869ddfb257bb8e780346779230adf10"
	I1212 23:58:12.116801  173337 cri.go:89] found id: "e820dc39eeab961e97aa57cb49017b1ac358db796eeb04c1fb2c5d3502e798c0"
	I1212 23:58:12.116807  173337 cri.go:89] found id: "6c74509fe23dc63af0c9a70eaba0301f0cea660c49c7825a31a938677dc03484"
	I1212 23:58:12.116812  173337 cri.go:89] found id: ""
	I1212 23:58:12.116864  173337 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-042245 -n pause-042245
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-042245 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-042245 logs -n 25: (1.519959395s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-269833 sudo           | NoKubernetes-269833       | jenkins | v1.32.0 | 12 Dec 23 23:54 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-269833                | NoKubernetes-269833       | jenkins | v1.32.0 | 12 Dec 23 23:54 UTC | 12 Dec 23 23:54 UTC |
	| start   | -p NoKubernetes-269833                | NoKubernetes-269833       | jenkins | v1.32.0 | 12 Dec 23 23:54 UTC | 12 Dec 23 23:55 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-279020             | running-upgrade-279020    | jenkins | v1.32.0 | 12 Dec 23 23:54 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-527166 ssh cat     | force-systemd-flag-527166 | jenkins | v1.32.0 | 12 Dec 23 23:55 UTC | 12 Dec 23 23:55 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-527166          | force-systemd-flag-527166 | jenkins | v1.32.0 | 12 Dec 23 23:55 UTC | 12 Dec 23 23:55 UTC |
	| start   | -p cert-options-643716                | cert-options-643716       | jenkins | v1.32.0 | 12 Dec 23 23:55 UTC | 12 Dec 23 23:56 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-269833 sudo           | NoKubernetes-269833       | jenkins | v1.32.0 | 12 Dec 23 23:55 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-269833                | NoKubernetes-269833       | jenkins | v1.32.0 | 12 Dec 23 23:55 UTC | 12 Dec 23 23:55 UTC |
	| start   | -p pause-042245 --memory=2048         | pause-042245              | jenkins | v1.32.0 | 12 Dec 23 23:55 UTC | 12 Dec 23 23:57 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-279020             | running-upgrade-279020    | jenkins | v1.32.0 | 12 Dec 23 23:55 UTC | 12 Dec 23 23:55 UTC |
	| start   | -p kubernetes-upgrade-961264          | kubernetes-upgrade-961264 | jenkins | v1.32.0 | 12 Dec 23 23:55 UTC | 12 Dec 23 23:57 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-643716 ssh               | cert-options-643716       | jenkins | v1.32.0 | 12 Dec 23 23:56 UTC | 12 Dec 23 23:56 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-643716 -- sudo        | cert-options-643716       | jenkins | v1.32.0 | 12 Dec 23 23:56 UTC | 12 Dec 23 23:56 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-643716                | cert-options-643716       | jenkins | v1.32.0 | 12 Dec 23 23:56 UTC | 12 Dec 23 23:56 UTC |
	| stop    | -p kubernetes-upgrade-961264          | kubernetes-upgrade-961264 | jenkins | v1.32.0 | 12 Dec 23 23:57 UTC | 12 Dec 23 23:57 UTC |
	| start   | -p kubernetes-upgrade-961264          | kubernetes-upgrade-961264 | jenkins | v1.32.0 | 12 Dec 23 23:57 UTC | 12 Dec 23 23:58 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-042245                       | pause-042245              | jenkins | v1.32.0 | 12 Dec 23 23:57 UTC | 12 Dec 23 23:58 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-961264          | kubernetes-upgrade-961264 | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-961264          | kubernetes-upgrade-961264 | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 12 Dec 23 23:58 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-380248             | cert-expiration-380248    | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 12 Dec 23 23:58 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-961264          | kubernetes-upgrade-961264 | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 12 Dec 23 23:58 UTC |
	| start   | -p old-k8s-version-508612             | old-k8s-version-508612    | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-380248             | cert-expiration-380248    | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 12 Dec 23 23:58 UTC |
	| start   | -p no-preload-143586                  | no-preload-143586         | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC |                     |
	|         | --memory=2200 --alsologtostderr       |                           |         |         |                     |                     |
	|         | --wait=true --preload=false           |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 23:58:53
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 23:58:53.422855  174148 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:58:53.423065  174148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:58:53.423075  174148 out.go:309] Setting ErrFile to fd 2...
	I1212 23:58:53.423080  174148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:58:53.423254  174148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1212 23:58:53.423863  174148 out.go:303] Setting JSON to false
	I1212 23:58:53.424907  174148 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9682,"bootTime":1702415852,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 23:58:53.424970  174148 start.go:138] virtualization: kvm guest
	I1212 23:58:53.427198  174148 out.go:177] * [no-preload-143586] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 23:58:53.428514  174148 out.go:177]   - MINIKUBE_LOCATION=17777
	I1212 23:58:53.428523  174148 notify.go:220] Checking for updates...
	I1212 23:58:53.430025  174148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:58:53.431669  174148 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:58:53.432867  174148 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 23:58:53.434162  174148 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 23:58:53.435460  174148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-12 23:56:12 UTC, ends at Tue 2023-12-12 23:58:53 UTC. --
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.773817676Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702425533773804599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=3d139e0b-1bb6-4b3e-9023-ce91f69d0933 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.774373840Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0ebae5e1-97d6-4152-8eb6-1d0d5ed06a9f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.774443588Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0ebae5e1-97d6-4152-8eb6-1d0d5ed06a9f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.774765648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a27bdab34c1c629010c827d9df542c9be0d2d0297b2e03f2337be318f7cc6de,PodSandboxId:d1f49631489192b69d5b0a33ea64660f4eab3b19099183e0dc73c884aa18c209,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702425514577512980,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk6dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d98d4a6-2802-42de-b6b2-af501fe02612,},Annotations:map[string]string{io.kubernetes.container.hash: 54420ab9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a732f0e328bf909dfa49503a2addefec8eb416db8eef10b7fa450e247ea45fd,PodSandboxId:bd7d60c4dcf55f14f0ba9112d9e7e9fa9b8f8852687982ce39110abed87b2267,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702425514554891660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fff5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b7f28a1-26d7-43f5-b7a9-e9da6beb8c0c,},Annotations:map[string]string{io.kubernetes.container.hash: f04fae55,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d0e1fdef57ae16df2e2939e564f3ab50007a8607d63c1318cd92380d94b707,PodSandboxId:4ed5ed97859efd418e5556ef55f9522f69b1dafa2983654d05e0253198dd3522,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702425509947123121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b96259
2fade2d69473f12cf5580a107,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a9c90b399b005633bddde8d6b7cea7173c430de34757f66b14ba1e265e2c642,PodSandboxId:c3ce3c7ac7cbdd5ed06db1f4f61d01536a50939ce87ffa1c5566a689fef38f6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702425509900677371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e16d1f1c68181863746218206d7c481,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 1bd6e647,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d9aa8939e48d5654fb6bcc6fc82ff15e13fb4003570c493118b3771bb0ecf80,PodSandboxId:70626acd90d13cf121a18a7ca71721a022a059bf958bc6d8a055ec90e865f095,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702425509916916038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcf8a7877d2f8ae370eeb4a4b9cab8c,},Annotations:map[string]string{io.kubernetes.
container.hash: f05701c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a71083b64960e98d2e51ee2354a989a374466df0f0a6df79d4455accba113cd3,PodSandboxId:8031b02c8351e2485bdeb595f492719fb4d195ea926e2366ace65d5fde158d87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702425506940903618,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b3024ee8f5077c90fd75626e660cd8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b43bd6ba4ffe03fd75fc55d07c3f4dbc2a445441a1dd96b4dcb7496f6090a2,PodSandboxId:c3ce3c7ac7cbdd5ed06db1f4f61d01536a50939ce87ffa1c5566a689fef38f6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1702425500996829568,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e16d1f1c68181863746218206d7c481,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd6e647,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf31ac96f345cc97036ac206292af5a9c1b99b1f603dc555e52a0d8c4792b3a,PodSandboxId:d1f49631489192b69d5b0a33ea64660f4eab3b19099183e0dc73c884aa18c209,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1702425493252974760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk6dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d98d4a6-2802-42de-b6b2-af501fe02612,},Annotations:map[string]string{io.kubernetes.container.hash: 54420ab9,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a00c45deaf0619238dcc9e59234b513812432a35ce65dffa76266028010dc2,PodSandboxId:bd7d60c4dcf55f14f0ba9112d9e7e9fa9b8f8852687982ce39110abed87b2267,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1702425493218682553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fff5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b7f28a1-26d7-43f5-b7a9-e9da6beb8c0c,},Annotations:map[string]string{io.kubernetes.container.hash: f04fae55,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465e43015dc4511b642dfd09115f386f9aef0a975b7d501e2cc929e53309e2c4,PodSandboxId:e9fe18a2c8c9073a66f63e48cb62fe8663704af310ebe90607d2a7a02ee84976,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1702425488562610142,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b962592fa
de2d69473f12cf5580a107,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b1d43ce71f1fe97981104945bb7cfae5f8473934dafeb27fad8af562d64b56,PodSandboxId:44e9150340683c0a25257706e5b602d733f385dc459c01a6714a892d840ee5c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1702425487744790198,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b3024ee8f5077c90fd75626e660cd8,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00d0e32c8c663f5f5de60451635e5b36869ddfb257bb8e780346779230adf10,PodSandboxId:27c183a6175b643d3e50205b5ef6584179d242ad935a531db3ab96389752331d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1702425487415222608,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcf8a7877d2f8ae370eeb4a4b9cab8c,},Annotations:map[string]string{io.kubernetes.container.hash
: f05701c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0ebae5e1-97d6-4152-8eb6-1d0d5ed06a9f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.819527780Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b55c343e-b4b9-47dd-8e46-d5e4b4c4ffd3 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.819618945Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b55c343e-b4b9-47dd-8e46-d5e4b4c4ffd3 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.821107533Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1a21aa3a-951d-4c1e-ada2-6fe4d0e9b2f3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.821606191Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702425533821590387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=1a21aa3a-951d-4c1e-ada2-6fe4d0e9b2f3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.822402247Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=933b0b62-4c8e-4665-94fc-a6d6c80b69d6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.822483139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=933b0b62-4c8e-4665-94fc-a6d6c80b69d6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.822905940Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a27bdab34c1c629010c827d9df542c9be0d2d0297b2e03f2337be318f7cc6de,PodSandboxId:d1f49631489192b69d5b0a33ea64660f4eab3b19099183e0dc73c884aa18c209,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702425514577512980,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk6dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d98d4a6-2802-42de-b6b2-af501fe02612,},Annotations:map[string]string{io.kubernetes.container.hash: 54420ab9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a732f0e328bf909dfa49503a2addefec8eb416db8eef10b7fa450e247ea45fd,PodSandboxId:bd7d60c4dcf55f14f0ba9112d9e7e9fa9b8f8852687982ce39110abed87b2267,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702425514554891660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fff5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b7f28a1-26d7-43f5-b7a9-e9da6beb8c0c,},Annotations:map[string]string{io.kubernetes.container.hash: f04fae55,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d0e1fdef57ae16df2e2939e564f3ab50007a8607d63c1318cd92380d94b707,PodSandboxId:4ed5ed97859efd418e5556ef55f9522f69b1dafa2983654d05e0253198dd3522,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702425509947123121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b96259
2fade2d69473f12cf5580a107,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a9c90b399b005633bddde8d6b7cea7173c430de34757f66b14ba1e265e2c642,PodSandboxId:c3ce3c7ac7cbdd5ed06db1f4f61d01536a50939ce87ffa1c5566a689fef38f6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702425509900677371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e16d1f1c68181863746218206d7c481,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 1bd6e647,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d9aa8939e48d5654fb6bcc6fc82ff15e13fb4003570c493118b3771bb0ecf80,PodSandboxId:70626acd90d13cf121a18a7ca71721a022a059bf958bc6d8a055ec90e865f095,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702425509916916038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcf8a7877d2f8ae370eeb4a4b9cab8c,},Annotations:map[string]string{io.kubernetes.
container.hash: f05701c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a71083b64960e98d2e51ee2354a989a374466df0f0a6df79d4455accba113cd3,PodSandboxId:8031b02c8351e2485bdeb595f492719fb4d195ea926e2366ace65d5fde158d87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702425506940903618,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b3024ee8f5077c90fd75626e660cd8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b43bd6ba4ffe03fd75fc55d07c3f4dbc2a445441a1dd96b4dcb7496f6090a2,PodSandboxId:c3ce3c7ac7cbdd5ed06db1f4f61d01536a50939ce87ffa1c5566a689fef38f6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1702425500996829568,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e16d1f1c68181863746218206d7c481,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd6e647,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf31ac96f345cc97036ac206292af5a9c1b99b1f603dc555e52a0d8c4792b3a,PodSandboxId:d1f49631489192b69d5b0a33ea64660f4eab3b19099183e0dc73c884aa18c209,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1702425493252974760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk6dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d98d4a6-2802-42de-b6b2-af501fe02612,},Annotations:map[string]string{io.kubernetes.container.hash: 54420ab9,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a00c45deaf0619238dcc9e59234b513812432a35ce65dffa76266028010dc2,PodSandboxId:bd7d60c4dcf55f14f0ba9112d9e7e9fa9b8f8852687982ce39110abed87b2267,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1702425493218682553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fff5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b7f28a1-26d7-43f5-b7a9-e9da6beb8c0c,},Annotations:map[string]string{io.kubernetes.container.hash: f04fae55,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465e43015dc4511b642dfd09115f386f9aef0a975b7d501e2cc929e53309e2c4,PodSandboxId:e9fe18a2c8c9073a66f63e48cb62fe8663704af310ebe90607d2a7a02ee84976,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1702425488562610142,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b962592fa
de2d69473f12cf5580a107,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b1d43ce71f1fe97981104945bb7cfae5f8473934dafeb27fad8af562d64b56,PodSandboxId:44e9150340683c0a25257706e5b602d733f385dc459c01a6714a892d840ee5c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1702425487744790198,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b3024ee8f5077c90fd75626e660cd8,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00d0e32c8c663f5f5de60451635e5b36869ddfb257bb8e780346779230adf10,PodSandboxId:27c183a6175b643d3e50205b5ef6584179d242ad935a531db3ab96389752331d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1702425487415222608,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcf8a7877d2f8ae370eeb4a4b9cab8c,},Annotations:map[string]string{io.kubernetes.container.hash
: f05701c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=933b0b62-4c8e-4665-94fc-a6d6c80b69d6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.869425935Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=afdd0169-91e3-4edd-8fe8-ec2a1974a72e name=/runtime.v1.RuntimeService/Version
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.869545724Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=afdd0169-91e3-4edd-8fe8-ec2a1974a72e name=/runtime.v1.RuntimeService/Version
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.870854815Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f6e1864f-8894-4021-839f-190fb3d75b1c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.871220421Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702425533871207906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=f6e1864f-8894-4021-839f-190fb3d75b1c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.871860722Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5b6c8766-530f-46ee-9599-0cd1dcbf7dde name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.871950099Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5b6c8766-530f-46ee-9599-0cd1dcbf7dde name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.872260489Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a27bdab34c1c629010c827d9df542c9be0d2d0297b2e03f2337be318f7cc6de,PodSandboxId:d1f49631489192b69d5b0a33ea64660f4eab3b19099183e0dc73c884aa18c209,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702425514577512980,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk6dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d98d4a6-2802-42de-b6b2-af501fe02612,},Annotations:map[string]string{io.kubernetes.container.hash: 54420ab9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a732f0e328bf909dfa49503a2addefec8eb416db8eef10b7fa450e247ea45fd,PodSandboxId:bd7d60c4dcf55f14f0ba9112d9e7e9fa9b8f8852687982ce39110abed87b2267,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702425514554891660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fff5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b7f28a1-26d7-43f5-b7a9-e9da6beb8c0c,},Annotations:map[string]string{io.kubernetes.container.hash: f04fae55,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d0e1fdef57ae16df2e2939e564f3ab50007a8607d63c1318cd92380d94b707,PodSandboxId:4ed5ed97859efd418e5556ef55f9522f69b1dafa2983654d05e0253198dd3522,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702425509947123121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b96259
2fade2d69473f12cf5580a107,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a9c90b399b005633bddde8d6b7cea7173c430de34757f66b14ba1e265e2c642,PodSandboxId:c3ce3c7ac7cbdd5ed06db1f4f61d01536a50939ce87ffa1c5566a689fef38f6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702425509900677371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e16d1f1c68181863746218206d7c481,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 1bd6e647,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d9aa8939e48d5654fb6bcc6fc82ff15e13fb4003570c493118b3771bb0ecf80,PodSandboxId:70626acd90d13cf121a18a7ca71721a022a059bf958bc6d8a055ec90e865f095,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702425509916916038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcf8a7877d2f8ae370eeb4a4b9cab8c,},Annotations:map[string]string{io.kubernetes.
container.hash: f05701c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a71083b64960e98d2e51ee2354a989a374466df0f0a6df79d4455accba113cd3,PodSandboxId:8031b02c8351e2485bdeb595f492719fb4d195ea926e2366ace65d5fde158d87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702425506940903618,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b3024ee8f5077c90fd75626e660cd8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b43bd6ba4ffe03fd75fc55d07c3f4dbc2a445441a1dd96b4dcb7496f6090a2,PodSandboxId:c3ce3c7ac7cbdd5ed06db1f4f61d01536a50939ce87ffa1c5566a689fef38f6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1702425500996829568,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e16d1f1c68181863746218206d7c481,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd6e647,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf31ac96f345cc97036ac206292af5a9c1b99b1f603dc555e52a0d8c4792b3a,PodSandboxId:d1f49631489192b69d5b0a33ea64660f4eab3b19099183e0dc73c884aa18c209,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1702425493252974760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk6dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d98d4a6-2802-42de-b6b2-af501fe02612,},Annotations:map[string]string{io.kubernetes.container.hash: 54420ab9,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a00c45deaf0619238dcc9e59234b513812432a35ce65dffa76266028010dc2,PodSandboxId:bd7d60c4dcf55f14f0ba9112d9e7e9fa9b8f8852687982ce39110abed87b2267,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1702425493218682553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fff5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b7f28a1-26d7-43f5-b7a9-e9da6beb8c0c,},Annotations:map[string]string{io.kubernetes.container.hash: f04fae55,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465e43015dc4511b642dfd09115f386f9aef0a975b7d501e2cc929e53309e2c4,PodSandboxId:e9fe18a2c8c9073a66f63e48cb62fe8663704af310ebe90607d2a7a02ee84976,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1702425488562610142,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b962592fa
de2d69473f12cf5580a107,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b1d43ce71f1fe97981104945bb7cfae5f8473934dafeb27fad8af562d64b56,PodSandboxId:44e9150340683c0a25257706e5b602d733f385dc459c01a6714a892d840ee5c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1702425487744790198,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b3024ee8f5077c90fd75626e660cd8,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00d0e32c8c663f5f5de60451635e5b36869ddfb257bb8e780346779230adf10,PodSandboxId:27c183a6175b643d3e50205b5ef6584179d242ad935a531db3ab96389752331d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1702425487415222608,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcf8a7877d2f8ae370eeb4a4b9cab8c,},Annotations:map[string]string{io.kubernetes.container.hash
: f05701c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5b6c8766-530f-46ee-9599-0cd1dcbf7dde name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.926179377Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f27af4c0-b109-496b-ad8a-bf48e263bf03 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.926342794Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f27af4c0-b109-496b-ad8a-bf48e263bf03 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.927606792Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fae13a65-fa8e-4fa3-bc04-441e58c17226 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.928211479Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702425533928196559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=fae13a65-fa8e-4fa3-bc04-441e58c17226 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.929068236Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b34317b5-11f5-4e5d-973c-dd43ddf4d5b9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.929165511Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b34317b5-11f5-4e5d-973c-dd43ddf4d5b9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:53 pause-042245 crio[2509]: time="2023-12-12 23:58:53.929518747Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a27bdab34c1c629010c827d9df542c9be0d2d0297b2e03f2337be318f7cc6de,PodSandboxId:d1f49631489192b69d5b0a33ea64660f4eab3b19099183e0dc73c884aa18c209,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702425514577512980,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk6dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d98d4a6-2802-42de-b6b2-af501fe02612,},Annotations:map[string]string{io.kubernetes.container.hash: 54420ab9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a732f0e328bf909dfa49503a2addefec8eb416db8eef10b7fa450e247ea45fd,PodSandboxId:bd7d60c4dcf55f14f0ba9112d9e7e9fa9b8f8852687982ce39110abed87b2267,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702425514554891660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fff5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b7f28a1-26d7-43f5-b7a9-e9da6beb8c0c,},Annotations:map[string]string{io.kubernetes.container.hash: f04fae55,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d0e1fdef57ae16df2e2939e564f3ab50007a8607d63c1318cd92380d94b707,PodSandboxId:4ed5ed97859efd418e5556ef55f9522f69b1dafa2983654d05e0253198dd3522,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702425509947123121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b96259
2fade2d69473f12cf5580a107,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a9c90b399b005633bddde8d6b7cea7173c430de34757f66b14ba1e265e2c642,PodSandboxId:c3ce3c7ac7cbdd5ed06db1f4f61d01536a50939ce87ffa1c5566a689fef38f6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702425509900677371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e16d1f1c68181863746218206d7c481,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 1bd6e647,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d9aa8939e48d5654fb6bcc6fc82ff15e13fb4003570c493118b3771bb0ecf80,PodSandboxId:70626acd90d13cf121a18a7ca71721a022a059bf958bc6d8a055ec90e865f095,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702425509916916038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcf8a7877d2f8ae370eeb4a4b9cab8c,},Annotations:map[string]string{io.kubernetes.
container.hash: f05701c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a71083b64960e98d2e51ee2354a989a374466df0f0a6df79d4455accba113cd3,PodSandboxId:8031b02c8351e2485bdeb595f492719fb4d195ea926e2366ace65d5fde158d87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702425506940903618,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b3024ee8f5077c90fd75626e660cd8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b43bd6ba4ffe03fd75fc55d07c3f4dbc2a445441a1dd96b4dcb7496f6090a2,PodSandboxId:c3ce3c7ac7cbdd5ed06db1f4f61d01536a50939ce87ffa1c5566a689fef38f6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1702425500996829568,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e16d1f1c68181863746218206d7c481,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd6e647,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf31ac96f345cc97036ac206292af5a9c1b99b1f603dc555e52a0d8c4792b3a,PodSandboxId:d1f49631489192b69d5b0a33ea64660f4eab3b19099183e0dc73c884aa18c209,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1702425493252974760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk6dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d98d4a6-2802-42de-b6b2-af501fe02612,},Annotations:map[string]string{io.kubernetes.container.hash: 54420ab9,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a00c45deaf0619238dcc9e59234b513812432a35ce65dffa76266028010dc2,PodSandboxId:bd7d60c4dcf55f14f0ba9112d9e7e9fa9b8f8852687982ce39110abed87b2267,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1702425493218682553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fff5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b7f28a1-26d7-43f5-b7a9-e9da6beb8c0c,},Annotations:map[string]string{io.kubernetes.container.hash: f04fae55,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465e43015dc4511b642dfd09115f386f9aef0a975b7d501e2cc929e53309e2c4,PodSandboxId:e9fe18a2c8c9073a66f63e48cb62fe8663704af310ebe90607d2a7a02ee84976,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1702425488562610142,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b962592fa
de2d69473f12cf5580a107,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b1d43ce71f1fe97981104945bb7cfae5f8473934dafeb27fad8af562d64b56,PodSandboxId:44e9150340683c0a25257706e5b602d733f385dc459c01a6714a892d840ee5c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1702425487744790198,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b3024ee8f5077c90fd75626e660cd8,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00d0e32c8c663f5f5de60451635e5b36869ddfb257bb8e780346779230adf10,PodSandboxId:27c183a6175b643d3e50205b5ef6584179d242ad935a531db3ab96389752331d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1702425487415222608,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcf8a7877d2f8ae370eeb4a4b9cab8c,},Annotations:map[string]string{io.kubernetes.container.hash
: f05701c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b34317b5-11f5-4e5d-973c-dd43ddf4d5b9 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8a27bdab34c1c       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   19 seconds ago      Running             kube-proxy                2                   d1f4963148919       kube-proxy-nk6dp
	9a732f0e328bf       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   19 seconds ago      Running             coredns                   2                   bd7d60c4dcf55       coredns-5dd5756b68-6fff5
	57d0e1fdef57a       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   24 seconds ago      Running             kube-scheduler            2                   4ed5ed97859ef       kube-scheduler-pause-042245
	5d9aa8939e48d       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   24 seconds ago      Running             kube-apiserver            2                   70626acd90d13       kube-apiserver-pause-042245
	9a9c90b399b00       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   24 seconds ago      Running             etcd                      3                   c3ce3c7ac7cbd       etcd-pause-042245
	a71083b64960e       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   27 seconds ago      Running             kube-controller-manager   2                   8031b02c8351e       kube-controller-manager-pause-042245
	d4b43bd6ba4ff       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   33 seconds ago      Exited              etcd                      2                   c3ce3c7ac7cbd       etcd-pause-042245
	caf31ac96f345       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   40 seconds ago      Exited              kube-proxy                1                   d1f4963148919       kube-proxy-nk6dp
	31a00c45deaf0       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   40 seconds ago      Exited              coredns                   1                   bd7d60c4dcf55       coredns-5dd5756b68-6fff5
	465e43015dc45       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   45 seconds ago      Exited              kube-scheduler            1                   e9fe18a2c8c90       kube-scheduler-pause-042245
	93b1d43ce71f1       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   46 seconds ago      Exited              kube-controller-manager   1                   44e9150340683       kube-controller-manager-pause-042245
	d00d0e32c8c66       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   46 seconds ago      Exited              kube-apiserver            1                   27c183a6175b6       kube-apiserver-pause-042245
	
	* 
	* ==> coredns [31a00c45deaf0619238dcc9e59234b513812432a35ce65dffa76266028010dc2] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56651 - 46325 "HINFO IN 2724357454419950740.2190635231988653957. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015345135s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [9a732f0e328bf909dfa49503a2addefec8eb416db8eef10b7fa450e247ea45fd] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42822 - 43644 "HINFO IN 3822534135260605557.6236460414960743442. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013738654s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-042245
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-042245
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446
	                    minikube.k8s.io/name=pause-042245
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T23_56_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:56:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-042245
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:58:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:58:34 +0000   Tue, 12 Dec 2023 23:56:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:58:34 +0000   Tue, 12 Dec 2023 23:56:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:58:34 +0000   Tue, 12 Dec 2023 23:56:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:58:34 +0000   Tue, 12 Dec 2023 23:56:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.180
	  Hostname:    pause-042245
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 0e38736152684fb08ad1e1a95efed320
	  System UUID:                0e387361-5268-4fb0-8ad1-e1a95efed320
	  Boot ID:                    d92c0287-811d-4e19-a513-66dbc8d5e161
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-6fff5                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     114s
	  kube-system                 etcd-pause-042245                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m8s
	  kube-system                 kube-apiserver-pause-042245             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-controller-manager-pause-042245    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-proxy-nk6dp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-pause-042245             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 2m9s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m9s               kubelet          Node pause-042245 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s               kubelet          Node pause-042245 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s               kubelet          Node pause-042245 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m8s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m8s               kubelet          Node pause-042245 status is now: NodeReady
	  Normal  RegisteredNode           117s               node-controller  Node pause-042245 event: Registered Node pause-042245 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-042245 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-042245 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-042245 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                 node-controller  Node pause-042245 event: Registered Node pause-042245 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec12 23:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070375] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.668648] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.738290] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.159906] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.130540] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.973942] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.117412] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.146193] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.138944] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.280855] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[ +10.892627] systemd-fstab-generator[918]: Ignoring "noauto" for root device
	[  +9.302919] systemd-fstab-generator[1254]: Ignoring "noauto" for root device
	[Dec12 23:57] kauditd_printk_skb: 19 callbacks suppressed
	[Dec12 23:58] systemd-fstab-generator[2235]: Ignoring "noauto" for root device
	[  +0.275697] systemd-fstab-generator[2275]: Ignoring "noauto" for root device
	[  +0.332221] systemd-fstab-generator[2307]: Ignoring "noauto" for root device
	[  +0.363065] systemd-fstab-generator[2347]: Ignoring "noauto" for root device
	[  +0.459712] systemd-fstab-generator[2398]: Ignoring "noauto" for root device
	[ +19.972601] systemd-fstab-generator[3327]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [9a9c90b399b005633bddde8d6b7cea7173c430de34757f66b14ba1e265e2c642] <==
	* {"level":"info","ts":"2023-12-12T23:58:30.912905Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T23:58:30.912944Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T23:58:30.913219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 switched to configuration voters=(3934897292032928695)"}
	{"level":"info","ts":"2023-12-12T23:58:30.913585Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"81b4a4bc8c2c313","local-member-id":"369b903d3744ebb7","added-peer-id":"369b903d3744ebb7","added-peer-peer-urls":["https://192.168.50.180:2380"]}
	{"level":"info","ts":"2023-12-12T23:58:30.913794Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"81b4a4bc8c2c313","local-member-id":"369b903d3744ebb7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:58:30.913889Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:58:30.919076Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-12T23:58:30.919209Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.180:2380"}
	{"level":"info","ts":"2023-12-12T23:58:30.919395Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.180:2380"}
	{"level":"info","ts":"2023-12-12T23:58:30.920756Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T23:58:30.920685Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"369b903d3744ebb7","initial-advertise-peer-urls":["https://192.168.50.180:2380"],"listen-peer-urls":["https://192.168.50.180:2380"],"advertise-client-urls":["https://192.168.50.180:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.180:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T23:58:32.595838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 is starting a new election at term 3"}
	{"level":"info","ts":"2023-12-12T23:58:32.595944Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-12-12T23:58:32.595981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 received MsgPreVoteResp from 369b903d3744ebb7 at term 3"}
	{"level":"info","ts":"2023-12-12T23:58:32.59601Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 became candidate at term 4"}
	{"level":"info","ts":"2023-12-12T23:58:32.596034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 received MsgVoteResp from 369b903d3744ebb7 at term 4"}
	{"level":"info","ts":"2023-12-12T23:58:32.596061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 became leader at term 4"}
	{"level":"info","ts":"2023-12-12T23:58:32.596091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 369b903d3744ebb7 elected leader 369b903d3744ebb7 at term 4"}
	{"level":"info","ts":"2023-12-12T23:58:32.5979Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"369b903d3744ebb7","local-member-attributes":"{Name:pause-042245 ClientURLs:[https://192.168.50.180:2379]}","request-path":"/0/members/369b903d3744ebb7/attributes","cluster-id":"81b4a4bc8c2c313","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:58:32.598219Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:58:32.598261Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:58:32.598364Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T23:58:32.598507Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:58:32.599588Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.180:2379"}
	{"level":"info","ts":"2023-12-12T23:58:32.59961Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [d4b43bd6ba4ffe03fd75fc55d07c3f4dbc2a445441a1dd96b4dcb7496f6090a2] <==
	* {"level":"info","ts":"2023-12-12T23:58:21.670511Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.180:2380"}
	{"level":"info","ts":"2023-12-12T23:58:22.050859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-12T23:58:22.050938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-12T23:58:22.050975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 received MsgPreVoteResp from 369b903d3744ebb7 at term 2"}
	{"level":"info","ts":"2023-12-12T23:58:22.051003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 became candidate at term 3"}
	{"level":"info","ts":"2023-12-12T23:58:22.051011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 received MsgVoteResp from 369b903d3744ebb7 at term 3"}
	{"level":"info","ts":"2023-12-12T23:58:22.051023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 became leader at term 3"}
	{"level":"info","ts":"2023-12-12T23:58:22.051066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 369b903d3744ebb7 elected leader 369b903d3744ebb7 at term 3"}
	{"level":"info","ts":"2023-12-12T23:58:22.056552Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"369b903d3744ebb7","local-member-attributes":"{Name:pause-042245 ClientURLs:[https://192.168.50.180:2379]}","request-path":"/0/members/369b903d3744ebb7/attributes","cluster-id":"81b4a4bc8c2c313","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:58:22.056563Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:58:22.056698Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:58:22.057887Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T23:58:22.058256Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.180:2379"}
	{"level":"info","ts":"2023-12-12T23:58:22.058745Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:58:22.058792Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T23:58:22.373093Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-12-12T23:58:22.373221Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-042245","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.180:2380"],"advertise-client-urls":["https://192.168.50.180:2379"]}
	{"level":"warn","ts":"2023-12-12T23:58:22.373425Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T23:58:22.373481Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T23:58:22.375198Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.180:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T23:58:22.37526Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.180:2379: use of closed network connection"}
	{"level":"info","ts":"2023-12-12T23:58:22.375476Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"369b903d3744ebb7","current-leader-member-id":"369b903d3744ebb7"}
	{"level":"info","ts":"2023-12-12T23:58:22.379409Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.180:2380"}
	{"level":"info","ts":"2023-12-12T23:58:22.379566Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.180:2380"}
	{"level":"info","ts":"2023-12-12T23:58:22.37962Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-042245","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.180:2380"],"advertise-client-urls":["https://192.168.50.180:2379"]}
	
	* 
	* ==> kernel <==
	*  23:58:54 up 2 min,  0 users,  load average: 2.86, 1.22, 0.46
	Linux pause-042245 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [5d9aa8939e48d5654fb6bcc6fc82ff15e13fb4003570c493118b3771bb0ecf80] <==
	* I1212 23:58:33.967404       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I1212 23:58:33.969112       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1212 23:58:33.967191       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I1212 23:58:34.122875       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 23:58:34.168106       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 23:58:34.170893       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 23:58:34.178756       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1212 23:58:34.187478       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1212 23:58:34.187827       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 23:58:34.185090       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 23:58:34.185104       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 23:58:34.188572       1 aggregator.go:166] initial CRD sync complete...
	I1212 23:58:34.188617       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 23:58:34.188642       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 23:58:34.188665       1 cache.go:39] Caches are synced for autoregister controller
	I1212 23:58:34.186014       1 shared_informer.go:318] Caches are synced for configmaps
	E1212 23:58:34.224453       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 23:58:34.975662       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 23:58:35.572669       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 23:58:35.583355       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 23:58:35.622907       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 23:58:35.659023       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 23:58:35.666146       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 23:58:46.770071       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 23:58:46.770983       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [d00d0e32c8c663f5f5de60451635e5b36869ddfb257bb8e780346779230adf10] <==
	* 
	* 
	* ==> kube-controller-manager [93b1d43ce71f1fe97981104945bb7cfae5f8473934dafeb27fad8af562d64b56] <==
	* 
	* 
	* ==> kube-controller-manager [a71083b64960e98d2e51ee2354a989a374466df0f0a6df79d4455accba113cd3] <==
	* I1212 23:58:46.783369       1 shared_informer.go:318] Caches are synced for TTL
	I1212 23:58:46.785745       1 shared_informer.go:318] Caches are synced for expand
	I1212 23:58:46.788572       1 shared_informer.go:318] Caches are synced for persistent volume
	I1212 23:58:46.792470       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1212 23:58:46.792565       1 shared_informer.go:318] Caches are synced for attach detach
	I1212 23:58:46.795384       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1212 23:58:46.796460       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1212 23:58:46.796825       1 shared_informer.go:318] Caches are synced for deployment
	I1212 23:58:46.797748       1 shared_informer.go:318] Caches are synced for stateful set
	I1212 23:58:46.797817       1 shared_informer.go:318] Caches are synced for HPA
	I1212 23:58:46.797834       1 shared_informer.go:318] Caches are synced for disruption
	I1212 23:58:46.801697       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1212 23:58:46.834657       1 shared_informer.go:318] Caches are synced for taint
	I1212 23:58:46.834916       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1212 23:58:46.835182       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-042245"
	I1212 23:58:46.835356       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1212 23:58:46.835396       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I1212 23:58:46.835419       1 taint_manager.go:210] "Sending events to api server"
	I1212 23:58:46.835824       1 event.go:307] "Event occurred" object="pause-042245" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-042245 event: Registered Node pause-042245 in Controller"
	I1212 23:58:46.877326       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 23:58:46.895187       1 shared_informer.go:318] Caches are synced for daemon sets
	I1212 23:58:46.907963       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 23:58:47.325909       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 23:58:47.326017       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 23:58:47.332665       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [8a27bdab34c1c629010c827d9df542c9be0d2d0297b2e03f2337be318f7cc6de] <==
	* I1212 23:58:34.787708       1 server_others.go:69] "Using iptables proxy"
	I1212 23:58:34.797093       1 node.go:141] Successfully retrieved node IP: 192.168.50.180
	I1212 23:58:34.838043       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 23:58:34.838098       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:58:34.842530       1 server_others.go:152] "Using iptables Proxier"
	I1212 23:58:34.842605       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:58:34.842801       1 server.go:846] "Version info" version="v1.28.4"
	I1212 23:58:34.842810       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:58:34.844051       1 config.go:315] "Starting node config controller"
	I1212 23:58:34.844154       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:58:34.844186       1 config.go:188] "Starting service config controller"
	I1212 23:58:34.844207       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:58:34.844237       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:58:34.844252       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:58:34.944508       1 shared_informer.go:318] Caches are synced for node config
	I1212 23:58:34.944560       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:58:34.944634       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [caf31ac96f345cc97036ac206292af5a9c1b99b1f603dc555e52a0d8c4792b3a] <==
	* I1212 23:58:13.503044       1 server_others.go:69] "Using iptables proxy"
	E1212 23:58:13.505981       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-042245": dial tcp 192.168.50.180:8443: connect: connection refused
	E1212 23:58:14.577541       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-042245": dial tcp 192.168.50.180:8443: connect: connection refused
	E1212 23:58:16.641843       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-042245": dial tcp 192.168.50.180:8443: connect: connection refused
	E1212 23:58:20.932429       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-042245": dial tcp 192.168.50.180:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [465e43015dc4511b642dfd09115f386f9aef0a975b7d501e2cc929e53309e2c4] <==
	* 
	* 
	* ==> kube-scheduler [57d0e1fdef57ae16df2e2939e564f3ab50007a8607d63c1318cd92380d94b707] <==
	* I1212 23:58:31.438558       1 serving.go:348] Generated self-signed cert in-memory
	I1212 23:58:34.160125       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1212 23:58:34.160208       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:58:34.164407       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1212 23:58:34.164476       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1212 23:58:34.164529       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 23:58:34.164553       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 23:58:34.164579       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1212 23:58:34.164600       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1212 23:58:34.165391       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1212 23:58:34.165473       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 23:58:34.266606       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1212 23:58:34.266678       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 23:58:34.266661       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:56:12 UTC, ends at Tue 2023-12-12 23:58:54 UTC. --
	Dec 12 23:58:30 pause-042245 kubelet[3333]: E1212 23:58:30.096569    3333 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.180:8443: connect: connection refused
	Dec 12 23:58:30 pause-042245 kubelet[3333]: W1212 23:58:30.368153    3333 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.180:8443: connect: connection refused
	Dec 12 23:58:30 pause-042245 kubelet[3333]: E1212 23:58:30.368234    3333 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.180:8443: connect: connection refused
	Dec 12 23:58:30 pause-042245 kubelet[3333]: W1212 23:58:30.375827    3333 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-042245&limit=500&resourceVersion=0": dial tcp 192.168.50.180:8443: connect: connection refused
	Dec 12 23:58:30 pause-042245 kubelet[3333]: E1212 23:58:30.375875    3333 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-042245&limit=500&resourceVersion=0": dial tcp 192.168.50.180:8443: connect: connection refused
	Dec 12 23:58:30 pause-042245 kubelet[3333]: E1212 23:58:30.619038    3333 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-042245?timeout=10s\": dial tcp 192.168.50.180:8443: connect: connection refused" interval="1.6s"
	Dec 12 23:58:30 pause-042245 kubelet[3333]: W1212 23:58:30.661945    3333 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.180:8443: connect: connection refused
	Dec 12 23:58:30 pause-042245 kubelet[3333]: E1212 23:58:30.662001    3333 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.180:8443: connect: connection refused
	Dec 12 23:58:30 pause-042245 kubelet[3333]: I1212 23:58:30.722788    3333 kubelet_node_status.go:70] "Attempting to register node" node="pause-042245"
	Dec 12 23:58:30 pause-042245 kubelet[3333]: E1212 23:58:30.723367    3333 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.180:8443: connect: connection refused" node="pause-042245"
	Dec 12 23:58:32 pause-042245 kubelet[3333]: I1212 23:58:32.325475    3333 kubelet_node_status.go:70] "Attempting to register node" node="pause-042245"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.193684    3333 kubelet_node_status.go:108] "Node was previously registered" node="pause-042245"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.193765    3333 kubelet_node_status.go:73] "Successfully registered node" node="pause-042245"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.199902    3333 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.201321    3333 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.207845    3333 apiserver.go:52] "Watching apiserver"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.210728    3333 topology_manager.go:215] "Topology Admit Handler" podUID="0d98d4a6-2802-42de-b6b2-af501fe02612" podNamespace="kube-system" podName="kube-proxy-nk6dp"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.210844    3333 topology_manager.go:215] "Topology Admit Handler" podUID="0b7f28a1-26d7-43f5-b7a9-e9da6beb8c0c" podNamespace="kube-system" podName="coredns-5dd5756b68-6fff5"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.215852    3333 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.316489    3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d98d4a6-2802-42de-b6b2-af501fe02612-lib-modules\") pod \"kube-proxy-nk6dp\" (UID: \"0d98d4a6-2802-42de-b6b2-af501fe02612\") " pod="kube-system/kube-proxy-nk6dp"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.316535    3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d98d4a6-2802-42de-b6b2-af501fe02612-xtables-lock\") pod \"kube-proxy-nk6dp\" (UID: \"0d98d4a6-2802-42de-b6b2-af501fe02612\") " pod="kube-system/kube-proxy-nk6dp"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: E1212 23:58:34.453961    3333 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-042245\" already exists" pod="kube-system/kube-apiserver-pause-042245"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.512179    3333 scope.go:117] "RemoveContainer" containerID="31a00c45deaf0619238dcc9e59234b513812432a35ce65dffa76266028010dc2"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.512948    3333 scope.go:117] "RemoveContainer" containerID="caf31ac96f345cc97036ac206292af5a9c1b99b1f603dc555e52a0d8c4792b3a"
	Dec 12 23:58:42 pause-042245 kubelet[3333]: I1212 23:58:42.413908    3333 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-042245 -n pause-042245
helpers_test.go:261: (dbg) Run:  kubectl --context pause-042245 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-042245 -n pause-042245
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-042245 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-042245 logs -n 25: (1.380509987s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-269833 sudo           | NoKubernetes-269833       | jenkins | v1.32.0 | 12 Dec 23 23:54 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-269833                | NoKubernetes-269833       | jenkins | v1.32.0 | 12 Dec 23 23:54 UTC | 12 Dec 23 23:54 UTC |
	| start   | -p NoKubernetes-269833                | NoKubernetes-269833       | jenkins | v1.32.0 | 12 Dec 23 23:54 UTC | 12 Dec 23 23:55 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-279020             | running-upgrade-279020    | jenkins | v1.32.0 | 12 Dec 23 23:54 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-527166 ssh cat     | force-systemd-flag-527166 | jenkins | v1.32.0 | 12 Dec 23 23:55 UTC | 12 Dec 23 23:55 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-527166          | force-systemd-flag-527166 | jenkins | v1.32.0 | 12 Dec 23 23:55 UTC | 12 Dec 23 23:55 UTC |
	| start   | -p cert-options-643716                | cert-options-643716       | jenkins | v1.32.0 | 12 Dec 23 23:55 UTC | 12 Dec 23 23:56 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-269833 sudo           | NoKubernetes-269833       | jenkins | v1.32.0 | 12 Dec 23 23:55 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-269833                | NoKubernetes-269833       | jenkins | v1.32.0 | 12 Dec 23 23:55 UTC | 12 Dec 23 23:55 UTC |
	| start   | -p pause-042245 --memory=2048         | pause-042245              | jenkins | v1.32.0 | 12 Dec 23 23:55 UTC | 12 Dec 23 23:57 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-279020             | running-upgrade-279020    | jenkins | v1.32.0 | 12 Dec 23 23:55 UTC | 12 Dec 23 23:55 UTC |
	| start   | -p kubernetes-upgrade-961264          | kubernetes-upgrade-961264 | jenkins | v1.32.0 | 12 Dec 23 23:55 UTC | 12 Dec 23 23:57 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-643716 ssh               | cert-options-643716       | jenkins | v1.32.0 | 12 Dec 23 23:56 UTC | 12 Dec 23 23:56 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-643716 -- sudo        | cert-options-643716       | jenkins | v1.32.0 | 12 Dec 23 23:56 UTC | 12 Dec 23 23:56 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-643716                | cert-options-643716       | jenkins | v1.32.0 | 12 Dec 23 23:56 UTC | 12 Dec 23 23:56 UTC |
	| stop    | -p kubernetes-upgrade-961264          | kubernetes-upgrade-961264 | jenkins | v1.32.0 | 12 Dec 23 23:57 UTC | 12 Dec 23 23:57 UTC |
	| start   | -p kubernetes-upgrade-961264          | kubernetes-upgrade-961264 | jenkins | v1.32.0 | 12 Dec 23 23:57 UTC | 12 Dec 23 23:58 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-042245                       | pause-042245              | jenkins | v1.32.0 | 12 Dec 23 23:57 UTC | 12 Dec 23 23:58 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-961264          | kubernetes-upgrade-961264 | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-961264          | kubernetes-upgrade-961264 | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 12 Dec 23 23:58 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-380248             | cert-expiration-380248    | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 12 Dec 23 23:58 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-961264          | kubernetes-upgrade-961264 | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 12 Dec 23 23:58 UTC |
	| start   | -p old-k8s-version-508612             | old-k8s-version-508612    | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-380248             | cert-expiration-380248    | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 12 Dec 23 23:58 UTC |
	| start   | -p no-preload-143586                  | no-preload-143586         | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC |                     |
	|         | --memory=2200 --alsologtostderr       |                           |         |         |                     |                     |
	|         | --wait=true --preload=false           |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 23:58:53
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 23:58:53.422855  174148 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:58:53.423065  174148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:58:53.423075  174148 out.go:309] Setting ErrFile to fd 2...
	I1212 23:58:53.423080  174148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:58:53.423254  174148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1212 23:58:53.423863  174148 out.go:303] Setting JSON to false
	I1212 23:58:53.424907  174148 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9682,"bootTime":1702415852,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 23:58:53.424970  174148 start.go:138] virtualization: kvm guest
	I1212 23:58:53.427198  174148 out.go:177] * [no-preload-143586] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 23:58:53.428514  174148 out.go:177]   - MINIKUBE_LOCATION=17777
	I1212 23:58:53.428523  174148 notify.go:220] Checking for updates...
	I1212 23:58:53.430025  174148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:58:53.431669  174148 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:58:53.432867  174148 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 23:58:53.434162  174148 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 23:58:53.435460  174148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:58:53.437251  174148 config.go:182] Loaded profile config "old-k8s-version-508612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 23:58:53.437393  174148 config.go:182] Loaded profile config "pause-042245": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:58:53.437466  174148 config.go:182] Loaded profile config "stopped-upgrade-884273": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1212 23:58:53.437553  174148 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:58:53.478176  174148 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 23:58:53.479510  174148 start.go:298] selected driver: kvm2
	I1212 23:58:53.479527  174148 start.go:902] validating driver "kvm2" against <nil>
	I1212 23:58:53.479538  174148 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:58:53.480580  174148 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:58:53.480722  174148 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17777-136241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 23:58:53.500291  174148 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 23:58:53.500337  174148 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 23:58:53.500606  174148 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 23:58:53.500685  174148 cni.go:84] Creating CNI manager for ""
	I1212 23:58:53.500702  174148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:58:53.500719  174148 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 23:58:53.500734  174148 start_flags.go:323] config:
	{Name:no-preload-143586 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-143586 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:58:53.500966  174148 iso.go:125] acquiring lock: {Name:mkaf0c5717f5bf6253bd7ebf86eb20a82d195bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:58:53.502773  174148 out.go:177] * Starting control plane node no-preload-143586 in cluster no-preload-143586
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-12 23:56:12 UTC, ends at Tue 2023-12-12 23:58:56 UTC. --
	Dec 12 23:58:55 pause-042245 crio[2509]: time="2023-12-12 23:58:55.938881381Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702425535938869561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=ad4d482b-fe0c-4459-af79-b520d8dcdbfa name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:58:55 pause-042245 crio[2509]: time="2023-12-12 23:58:55.939857663Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a9ea16eb-56cb-4d7c-9519-d41964a695b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:55 pause-042245 crio[2509]: time="2023-12-12 23:58:55.939905992Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a9ea16eb-56cb-4d7c-9519-d41964a695b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:55 pause-042245 crio[2509]: time="2023-12-12 23:58:55.940653569Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a27bdab34c1c629010c827d9df542c9be0d2d0297b2e03f2337be318f7cc6de,PodSandboxId:d1f49631489192b69d5b0a33ea64660f4eab3b19099183e0dc73c884aa18c209,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702425514577512980,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk6dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d98d4a6-2802-42de-b6b2-af501fe02612,},Annotations:map[string]string{io.kubernetes.container.hash: 54420ab9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a732f0e328bf909dfa49503a2addefec8eb416db8eef10b7fa450e247ea45fd,PodSandboxId:bd7d60c4dcf55f14f0ba9112d9e7e9fa9b8f8852687982ce39110abed87b2267,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702425514554891660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fff5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b7f28a1-26d7-43f5-b7a9-e9da6beb8c0c,},Annotations:map[string]string{io.kubernetes.container.hash: f04fae55,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d0e1fdef57ae16df2e2939e564f3ab50007a8607d63c1318cd92380d94b707,PodSandboxId:4ed5ed97859efd418e5556ef55f9522f69b1dafa2983654d05e0253198dd3522,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702425509947123121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b96259
2fade2d69473f12cf5580a107,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a9c90b399b005633bddde8d6b7cea7173c430de34757f66b14ba1e265e2c642,PodSandboxId:c3ce3c7ac7cbdd5ed06db1f4f61d01536a50939ce87ffa1c5566a689fef38f6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702425509900677371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e16d1f1c68181863746218206d7c481,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 1bd6e647,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d9aa8939e48d5654fb6bcc6fc82ff15e13fb4003570c493118b3771bb0ecf80,PodSandboxId:70626acd90d13cf121a18a7ca71721a022a059bf958bc6d8a055ec90e865f095,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702425509916916038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcf8a7877d2f8ae370eeb4a4b9cab8c,},Annotations:map[string]string{io.kubernetes.
container.hash: f05701c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a71083b64960e98d2e51ee2354a989a374466df0f0a6df79d4455accba113cd3,PodSandboxId:8031b02c8351e2485bdeb595f492719fb4d195ea926e2366ace65d5fde158d87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702425506940903618,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b3024ee8f5077c90fd75626e660cd8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b43bd6ba4ffe03fd75fc55d07c3f4dbc2a445441a1dd96b4dcb7496f6090a2,PodSandboxId:c3ce3c7ac7cbdd5ed06db1f4f61d01536a50939ce87ffa1c5566a689fef38f6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1702425500996829568,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e16d1f1c68181863746218206d7c481,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd6e647,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf31ac96f345cc97036ac206292af5a9c1b99b1f603dc555e52a0d8c4792b3a,PodSandboxId:d1f49631489192b69d5b0a33ea64660f4eab3b19099183e0dc73c884aa18c209,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1702425493252974760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk6dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d98d4a6-2802-42de-b6b2-af501fe02612,},Annotations:map[string]string{io.kubernetes.container.hash: 54420ab9,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a00c45deaf0619238dcc9e59234b513812432a35ce65dffa76266028010dc2,PodSandboxId:bd7d60c4dcf55f14f0ba9112d9e7e9fa9b8f8852687982ce39110abed87b2267,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1702425493218682553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fff5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b7f28a1-26d7-43f5-b7a9-e9da6beb8c0c,},Annotations:map[string]string{io.kubernetes.container.hash: f04fae55,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465e43015dc4511b642dfd09115f386f9aef0a975b7d501e2cc929e53309e2c4,PodSandboxId:e9fe18a2c8c9073a66f63e48cb62fe8663704af310ebe90607d2a7a02ee84976,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1702425488562610142,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b962592fa
de2d69473f12cf5580a107,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b1d43ce71f1fe97981104945bb7cfae5f8473934dafeb27fad8af562d64b56,PodSandboxId:44e9150340683c0a25257706e5b602d733f385dc459c01a6714a892d840ee5c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1702425487744790198,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b3024ee8f5077c90fd75626e660cd8,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00d0e32c8c663f5f5de60451635e5b36869ddfb257bb8e780346779230adf10,PodSandboxId:27c183a6175b643d3e50205b5ef6584179d242ad935a531db3ab96389752331d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1702425487415222608,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcf8a7877d2f8ae370eeb4a4b9cab8c,},Annotations:map[string]string{io.kubernetes.container.hash
: f05701c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a9ea16eb-56cb-4d7c-9519-d41964a695b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:55 pause-042245 crio[2509]: time="2023-12-12 23:58:55.986525248Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1a23c977-2515-4077-b445-9ef1c345bef9 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:58:55 pause-042245 crio[2509]: time="2023-12-12 23:58:55.986612573Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1a23c977-2515-4077-b445-9ef1c345bef9 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:58:55 pause-042245 crio[2509]: time="2023-12-12 23:58:55.987654975Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6961c091-3242-4b33-9155-96a404e54576 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:58:55 pause-042245 crio[2509]: time="2023-12-12 23:58:55.987981432Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702425535987966125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=6961c091-3242-4b33-9155-96a404e54576 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:58:55 pause-042245 crio[2509]: time="2023-12-12 23:58:55.989089827Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0ecca327-9ee0-4422-8cdd-3ef490f63f2c name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:55 pause-042245 crio[2509]: time="2023-12-12 23:58:55.989138153Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0ecca327-9ee0-4422-8cdd-3ef490f63f2c name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:55 pause-042245 crio[2509]: time="2023-12-12 23:58:55.989522155Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a27bdab34c1c629010c827d9df542c9be0d2d0297b2e03f2337be318f7cc6de,PodSandboxId:d1f49631489192b69d5b0a33ea64660f4eab3b19099183e0dc73c884aa18c209,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702425514577512980,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk6dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d98d4a6-2802-42de-b6b2-af501fe02612,},Annotations:map[string]string{io.kubernetes.container.hash: 54420ab9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a732f0e328bf909dfa49503a2addefec8eb416db8eef10b7fa450e247ea45fd,PodSandboxId:bd7d60c4dcf55f14f0ba9112d9e7e9fa9b8f8852687982ce39110abed87b2267,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702425514554891660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fff5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b7f28a1-26d7-43f5-b7a9-e9da6beb8c0c,},Annotations:map[string]string{io.kubernetes.container.hash: f04fae55,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d0e1fdef57ae16df2e2939e564f3ab50007a8607d63c1318cd92380d94b707,PodSandboxId:4ed5ed97859efd418e5556ef55f9522f69b1dafa2983654d05e0253198dd3522,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702425509947123121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b96259
2fade2d69473f12cf5580a107,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a9c90b399b005633bddde8d6b7cea7173c430de34757f66b14ba1e265e2c642,PodSandboxId:c3ce3c7ac7cbdd5ed06db1f4f61d01536a50939ce87ffa1c5566a689fef38f6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702425509900677371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e16d1f1c68181863746218206d7c481,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 1bd6e647,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d9aa8939e48d5654fb6bcc6fc82ff15e13fb4003570c493118b3771bb0ecf80,PodSandboxId:70626acd90d13cf121a18a7ca71721a022a059bf958bc6d8a055ec90e865f095,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702425509916916038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcf8a7877d2f8ae370eeb4a4b9cab8c,},Annotations:map[string]string{io.kubernetes.
container.hash: f05701c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a71083b64960e98d2e51ee2354a989a374466df0f0a6df79d4455accba113cd3,PodSandboxId:8031b02c8351e2485bdeb595f492719fb4d195ea926e2366ace65d5fde158d87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702425506940903618,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b3024ee8f5077c90fd75626e660cd8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b43bd6ba4ffe03fd75fc55d07c3f4dbc2a445441a1dd96b4dcb7496f6090a2,PodSandboxId:c3ce3c7ac7cbdd5ed06db1f4f61d01536a50939ce87ffa1c5566a689fef38f6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1702425500996829568,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e16d1f1c68181863746218206d7c481,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd6e647,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf31ac96f345cc97036ac206292af5a9c1b99b1f603dc555e52a0d8c4792b3a,PodSandboxId:d1f49631489192b69d5b0a33ea64660f4eab3b19099183e0dc73c884aa18c209,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1702425493252974760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk6dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d98d4a6-2802-42de-b6b2-af501fe02612,},Annotations:map[string]string{io.kubernetes.container.hash: 54420ab9,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a00c45deaf0619238dcc9e59234b513812432a35ce65dffa76266028010dc2,PodSandboxId:bd7d60c4dcf55f14f0ba9112d9e7e9fa9b8f8852687982ce39110abed87b2267,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1702425493218682553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fff5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b7f28a1-26d7-43f5-b7a9-e9da6beb8c0c,},Annotations:map[string]string{io.kubernetes.container.hash: f04fae55,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465e43015dc4511b642dfd09115f386f9aef0a975b7d501e2cc929e53309e2c4,PodSandboxId:e9fe18a2c8c9073a66f63e48cb62fe8663704af310ebe90607d2a7a02ee84976,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1702425488562610142,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b962592fa
de2d69473f12cf5580a107,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b1d43ce71f1fe97981104945bb7cfae5f8473934dafeb27fad8af562d64b56,PodSandboxId:44e9150340683c0a25257706e5b602d733f385dc459c01a6714a892d840ee5c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1702425487744790198,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b3024ee8f5077c90fd75626e660cd8,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00d0e32c8c663f5f5de60451635e5b36869ddfb257bb8e780346779230adf10,PodSandboxId:27c183a6175b643d3e50205b5ef6584179d242ad935a531db3ab96389752331d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1702425487415222608,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcf8a7877d2f8ae370eeb4a4b9cab8c,},Annotations:map[string]string{io.kubernetes.container.hash
: f05701c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0ecca327-9ee0-4422-8cdd-3ef490f63f2c name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:56 pause-042245 crio[2509]: time="2023-12-12 23:58:56.037747540Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a66cfb8d-6b77-40fb-a561-673ab3b9c4f3 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:58:56 pause-042245 crio[2509]: time="2023-12-12 23:58:56.037829947Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a66cfb8d-6b77-40fb-a561-673ab3b9c4f3 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:58:56 pause-042245 crio[2509]: time="2023-12-12 23:58:56.039157313Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b9d01308-d994-47b5-b5ab-cdaead777453 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:58:56 pause-042245 crio[2509]: time="2023-12-12 23:58:56.039578104Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702425536039561731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=b9d01308-d994-47b5-b5ab-cdaead777453 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:58:56 pause-042245 crio[2509]: time="2023-12-12 23:58:56.040342939Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ba367264-b86a-40f8-b605-105373edb534 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:56 pause-042245 crio[2509]: time="2023-12-12 23:58:56.040416680Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ba367264-b86a-40f8-b605-105373edb534 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:56 pause-042245 crio[2509]: time="2023-12-12 23:58:56.040660847Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a27bdab34c1c629010c827d9df542c9be0d2d0297b2e03f2337be318f7cc6de,PodSandboxId:d1f49631489192b69d5b0a33ea64660f4eab3b19099183e0dc73c884aa18c209,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702425514577512980,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk6dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d98d4a6-2802-42de-b6b2-af501fe02612,},Annotations:map[string]string{io.kubernetes.container.hash: 54420ab9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a732f0e328bf909dfa49503a2addefec8eb416db8eef10b7fa450e247ea45fd,PodSandboxId:bd7d60c4dcf55f14f0ba9112d9e7e9fa9b8f8852687982ce39110abed87b2267,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702425514554891660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fff5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b7f28a1-26d7-43f5-b7a9-e9da6beb8c0c,},Annotations:map[string]string{io.kubernetes.container.hash: f04fae55,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d0e1fdef57ae16df2e2939e564f3ab50007a8607d63c1318cd92380d94b707,PodSandboxId:4ed5ed97859efd418e5556ef55f9522f69b1dafa2983654d05e0253198dd3522,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702425509947123121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b96259
2fade2d69473f12cf5580a107,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a9c90b399b005633bddde8d6b7cea7173c430de34757f66b14ba1e265e2c642,PodSandboxId:c3ce3c7ac7cbdd5ed06db1f4f61d01536a50939ce87ffa1c5566a689fef38f6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702425509900677371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e16d1f1c68181863746218206d7c481,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 1bd6e647,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d9aa8939e48d5654fb6bcc6fc82ff15e13fb4003570c493118b3771bb0ecf80,PodSandboxId:70626acd90d13cf121a18a7ca71721a022a059bf958bc6d8a055ec90e865f095,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702425509916916038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcf8a7877d2f8ae370eeb4a4b9cab8c,},Annotations:map[string]string{io.kubernetes.
container.hash: f05701c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a71083b64960e98d2e51ee2354a989a374466df0f0a6df79d4455accba113cd3,PodSandboxId:8031b02c8351e2485bdeb595f492719fb4d195ea926e2366ace65d5fde158d87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702425506940903618,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b3024ee8f5077c90fd75626e660cd8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b43bd6ba4ffe03fd75fc55d07c3f4dbc2a445441a1dd96b4dcb7496f6090a2,PodSandboxId:c3ce3c7ac7cbdd5ed06db1f4f61d01536a50939ce87ffa1c5566a689fef38f6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1702425500996829568,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e16d1f1c68181863746218206d7c481,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd6e647,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf31ac96f345cc97036ac206292af5a9c1b99b1f603dc555e52a0d8c4792b3a,PodSandboxId:d1f49631489192b69d5b0a33ea64660f4eab3b19099183e0dc73c884aa18c209,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1702425493252974760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk6dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d98d4a6-2802-42de-b6b2-af501fe02612,},Annotations:map[string]string{io.kubernetes.container.hash: 54420ab9,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a00c45deaf0619238dcc9e59234b513812432a35ce65dffa76266028010dc2,PodSandboxId:bd7d60c4dcf55f14f0ba9112d9e7e9fa9b8f8852687982ce39110abed87b2267,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1702425493218682553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fff5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b7f28a1-26d7-43f5-b7a9-e9da6beb8c0c,},Annotations:map[string]string{io.kubernetes.container.hash: f04fae55,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465e43015dc4511b642dfd09115f386f9aef0a975b7d501e2cc929e53309e2c4,PodSandboxId:e9fe18a2c8c9073a66f63e48cb62fe8663704af310ebe90607d2a7a02ee84976,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1702425488562610142,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b962592fa
de2d69473f12cf5580a107,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b1d43ce71f1fe97981104945bb7cfae5f8473934dafeb27fad8af562d64b56,PodSandboxId:44e9150340683c0a25257706e5b602d733f385dc459c01a6714a892d840ee5c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1702425487744790198,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b3024ee8f5077c90fd75626e660cd8,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00d0e32c8c663f5f5de60451635e5b36869ddfb257bb8e780346779230adf10,PodSandboxId:27c183a6175b643d3e50205b5ef6584179d242ad935a531db3ab96389752331d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1702425487415222608,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcf8a7877d2f8ae370eeb4a4b9cab8c,},Annotations:map[string]string{io.kubernetes.container.hash
: f05701c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ba367264-b86a-40f8-b605-105373edb534 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:56 pause-042245 crio[2509]: time="2023-12-12 23:58:56.086874655Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2b79b22a-ec06-4f87-948f-1b65d765aaad name=/runtime.v1.RuntimeService/Version
	Dec 12 23:58:56 pause-042245 crio[2509]: time="2023-12-12 23:58:56.086960651Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2b79b22a-ec06-4f87-948f-1b65d765aaad name=/runtime.v1.RuntimeService/Version
	Dec 12 23:58:56 pause-042245 crio[2509]: time="2023-12-12 23:58:56.088804790Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=62e5f5ec-336c-4f5c-8219-2658bcacb9a7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:58:56 pause-042245 crio[2509]: time="2023-12-12 23:58:56.089227771Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702425536089210674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=62e5f5ec-336c-4f5c-8219-2658bcacb9a7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:58:56 pause-042245 crio[2509]: time="2023-12-12 23:58:56.090673360Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1f1a26d1-cbde-458d-8ef1-7ed7ac29f57e name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:56 pause-042245 crio[2509]: time="2023-12-12 23:58:56.090758243Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1f1a26d1-cbde-458d-8ef1-7ed7ac29f57e name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:58:56 pause-042245 crio[2509]: time="2023-12-12 23:58:56.091085015Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a27bdab34c1c629010c827d9df542c9be0d2d0297b2e03f2337be318f7cc6de,PodSandboxId:d1f49631489192b69d5b0a33ea64660f4eab3b19099183e0dc73c884aa18c209,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702425514577512980,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk6dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d98d4a6-2802-42de-b6b2-af501fe02612,},Annotations:map[string]string{io.kubernetes.container.hash: 54420ab9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a732f0e328bf909dfa49503a2addefec8eb416db8eef10b7fa450e247ea45fd,PodSandboxId:bd7d60c4dcf55f14f0ba9112d9e7e9fa9b8f8852687982ce39110abed87b2267,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702425514554891660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fff5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b7f28a1-26d7-43f5-b7a9-e9da6beb8c0c,},Annotations:map[string]string{io.kubernetes.container.hash: f04fae55,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d0e1fdef57ae16df2e2939e564f3ab50007a8607d63c1318cd92380d94b707,PodSandboxId:4ed5ed97859efd418e5556ef55f9522f69b1dafa2983654d05e0253198dd3522,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702425509947123121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b96259
2fade2d69473f12cf5580a107,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a9c90b399b005633bddde8d6b7cea7173c430de34757f66b14ba1e265e2c642,PodSandboxId:c3ce3c7ac7cbdd5ed06db1f4f61d01536a50939ce87ffa1c5566a689fef38f6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702425509900677371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e16d1f1c68181863746218206d7c481,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 1bd6e647,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d9aa8939e48d5654fb6bcc6fc82ff15e13fb4003570c493118b3771bb0ecf80,PodSandboxId:70626acd90d13cf121a18a7ca71721a022a059bf958bc6d8a055ec90e865f095,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702425509916916038,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcf8a7877d2f8ae370eeb4a4b9cab8c,},Annotations:map[string]string{io.kubernetes.
container.hash: f05701c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a71083b64960e98d2e51ee2354a989a374466df0f0a6df79d4455accba113cd3,PodSandboxId:8031b02c8351e2485bdeb595f492719fb4d195ea926e2366ace65d5fde158d87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702425506940903618,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b3024ee8f5077c90fd75626e660cd8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b43bd6ba4ffe03fd75fc55d07c3f4dbc2a445441a1dd96b4dcb7496f6090a2,PodSandboxId:c3ce3c7ac7cbdd5ed06db1f4f61d01536a50939ce87ffa1c5566a689fef38f6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1702425500996829568,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e16d1f1c68181863746218206d7c481,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd6e647,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf31ac96f345cc97036ac206292af5a9c1b99b1f603dc555e52a0d8c4792b3a,PodSandboxId:d1f49631489192b69d5b0a33ea64660f4eab3b19099183e0dc73c884aa18c209,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1702425493252974760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nk6dp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d98d4a6-2802-42de-b6b2-af501fe02612,},Annotations:map[string]string{io.kubernetes.container.hash: 54420ab9,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a00c45deaf0619238dcc9e59234b513812432a35ce65dffa76266028010dc2,PodSandboxId:bd7d60c4dcf55f14f0ba9112d9e7e9fa9b8f8852687982ce39110abed87b2267,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1702425493218682553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fff5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b7f28a1-26d7-43f5-b7a9-e9da6beb8c0c,},Annotations:map[string]string{io.kubernetes.container.hash: f04fae55,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465e43015dc4511b642dfd09115f386f9aef0a975b7d501e2cc929e53309e2c4,PodSandboxId:e9fe18a2c8c9073a66f63e48cb62fe8663704af310ebe90607d2a7a02ee84976,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1702425488562610142,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b962592fa
de2d69473f12cf5580a107,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93b1d43ce71f1fe97981104945bb7cfae5f8473934dafeb27fad8af562d64b56,PodSandboxId:44e9150340683c0a25257706e5b602d733f385dc459c01a6714a892d840ee5c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1702425487744790198,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b3024ee8f5077c90fd75626e660cd8,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00d0e32c8c663f5f5de60451635e5b36869ddfb257bb8e780346779230adf10,PodSandboxId:27c183a6175b643d3e50205b5ef6584179d242ad935a531db3ab96389752331d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1702425487415222608,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-042245,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fcf8a7877d2f8ae370eeb4a4b9cab8c,},Annotations:map[string]string{io.kubernetes.container.hash
: f05701c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1f1a26d1-cbde-458d-8ef1-7ed7ac29f57e name=/runtime.v1.RuntimeService/ListContainers
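	Note: the repeated Version, ImageFsInfo and ListContainers requests in the CRI-O debug log above are routine CRI polling from a client (likely the kubelet and/or the log-collection step of this test run); each ListContainers call carries an empty filter, which is why CRI-O reports "No filters were applied, returning full container list". Purely as an illustration of that RuntimeService API, and not part of the test run itself, the following Go sketch issues the same ListContainers RPC against the CRI-O socket; it assumes the k8s.io/cri-api and google.golang.org/grpc modules are available and that it runs on the node where unix:///var/run/crio/crio.sock is reachable.

	// sketch: list containers over CRI, analogous to the ListContainers calls logged above
	// (assumes k8s.io/cri-api and google.golang.org/grpc are on the module path and that
	// this runs on the node where the CRI-O socket exists)
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O's socket, per the kubeadm.alpha.kubernetes.io/cri-socket annotation below.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI-O socket: %v", err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty request means no filter, matching CRI-O's
		// "No filters were applied, returning full container list" debug message.
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			// Same fields that appear in the "container status" table below:
			// truncated ID, name, attempt (restart count) and state.
			fmt.Printf("%s  %-30s attempt=%d  %s\n",
				c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}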
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8a27bdab34c1c       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   21 seconds ago      Running             kube-proxy                2                   d1f4963148919       kube-proxy-nk6dp
	9a732f0e328bf       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   21 seconds ago      Running             coredns                   2                   bd7d60c4dcf55       coredns-5dd5756b68-6fff5
	57d0e1fdef57a       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   26 seconds ago      Running             kube-scheduler            2                   4ed5ed97859ef       kube-scheduler-pause-042245
	5d9aa8939e48d       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   26 seconds ago      Running             kube-apiserver            2                   70626acd90d13       kube-apiserver-pause-042245
	9a9c90b399b00       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   26 seconds ago      Running             etcd                      3                   c3ce3c7ac7cbd       etcd-pause-042245
	a71083b64960e       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   29 seconds ago      Running             kube-controller-manager   2                   8031b02c8351e       kube-controller-manager-pause-042245
	d4b43bd6ba4ff       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   35 seconds ago      Exited              etcd                      2                   c3ce3c7ac7cbd       etcd-pause-042245
	caf31ac96f345       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   42 seconds ago      Exited              kube-proxy                1                   d1f4963148919       kube-proxy-nk6dp
	31a00c45deaf0       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   42 seconds ago      Exited              coredns                   1                   bd7d60c4dcf55       coredns-5dd5756b68-6fff5
	465e43015dc45       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   47 seconds ago      Exited              kube-scheduler            1                   e9fe18a2c8c90       kube-scheduler-pause-042245
	93b1d43ce71f1       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   48 seconds ago      Exited              kube-controller-manager   1                   44e9150340683       kube-controller-manager-pause-042245
	d00d0e32c8c66       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   48 seconds ago      Exited              kube-apiserver            1                   27c183a6175b6       kube-apiserver-pause-042245
	
	* 
	* ==> coredns [31a00c45deaf0619238dcc9e59234b513812432a35ce65dffa76266028010dc2] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56651 - 46325 "HINFO IN 2724357454419950740.2190635231988653957. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015345135s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [9a732f0e328bf909dfa49503a2addefec8eb416db8eef10b7fa450e247ea45fd] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42822 - 43644 "HINFO IN 3822534135260605557.6236460414960743442. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013738654s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-042245
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-042245
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446
	                    minikube.k8s.io/name=pause-042245
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T23_56_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:56:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-042245
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:58:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:58:34 +0000   Tue, 12 Dec 2023 23:56:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:58:34 +0000   Tue, 12 Dec 2023 23:56:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:58:34 +0000   Tue, 12 Dec 2023 23:56:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:58:34 +0000   Tue, 12 Dec 2023 23:56:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.180
	  Hostname:    pause-042245
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 0e38736152684fb08ad1e1a95efed320
	  System UUID:                0e387361-5268-4fb0-8ad1-e1a95efed320
	  Boot ID:                    d92c0287-811d-4e19-a513-66dbc8d5e161
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-6fff5                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     116s
	  kube-system                 etcd-pause-042245                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m10s
	  kube-system                 kube-apiserver-pause-042245             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-controller-manager-pause-042245    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-proxy-nk6dp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-scheduler-pause-042245             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 112s               kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  Starting                 2m11s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m11s              kubelet          Node pause-042245 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s              kubelet          Node pause-042245 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s              kubelet          Node pause-042245 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m10s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m10s              kubelet          Node pause-042245 status is now: NodeReady
	  Normal  RegisteredNode           119s               node-controller  Node pause-042245 event: Registered Node pause-042245 in Controller
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)  kubelet          Node pause-042245 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet          Node pause-042245 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)  kubelet          Node pause-042245 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10s                node-controller  Node pause-042245 event: Registered Node pause-042245 in Controller
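	(Arithmetic check on the resource tables above, derived from the node's allocatable capacity of 2 CPUs and 2017420Ki memory: CPU requests are 100m + 100m + 250m + 200m + 100m = 750m, i.e. 750m / 2000m ≈ 37%; memory requests are 70Mi + 100Mi = 170Mi out of roughly 1970Mi allocatable, i.e. ≈ 8%.)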
	
	* 
	* ==> dmesg <==
	* [Dec12 23:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070375] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.668648] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.738290] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.159906] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.130540] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.973942] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.117412] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.146193] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.138944] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.280855] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[ +10.892627] systemd-fstab-generator[918]: Ignoring "noauto" for root device
	[  +9.302919] systemd-fstab-generator[1254]: Ignoring "noauto" for root device
	[Dec12 23:57] kauditd_printk_skb: 19 callbacks suppressed
	[Dec12 23:58] systemd-fstab-generator[2235]: Ignoring "noauto" for root device
	[  +0.275697] systemd-fstab-generator[2275]: Ignoring "noauto" for root device
	[  +0.332221] systemd-fstab-generator[2307]: Ignoring "noauto" for root device
	[  +0.363065] systemd-fstab-generator[2347]: Ignoring "noauto" for root device
	[  +0.459712] systemd-fstab-generator[2398]: Ignoring "noauto" for root device
	[ +19.972601] systemd-fstab-generator[3327]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [9a9c90b399b005633bddde8d6b7cea7173c430de34757f66b14ba1e265e2c642] <==
	* {"level":"info","ts":"2023-12-12T23:58:30.912905Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T23:58:30.912944Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T23:58:30.913219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 switched to configuration voters=(3934897292032928695)"}
	{"level":"info","ts":"2023-12-12T23:58:30.913585Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"81b4a4bc8c2c313","local-member-id":"369b903d3744ebb7","added-peer-id":"369b903d3744ebb7","added-peer-peer-urls":["https://192.168.50.180:2380"]}
	{"level":"info","ts":"2023-12-12T23:58:30.913794Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"81b4a4bc8c2c313","local-member-id":"369b903d3744ebb7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:58:30.913889Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:58:30.919076Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-12T23:58:30.919209Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.180:2380"}
	{"level":"info","ts":"2023-12-12T23:58:30.919395Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.180:2380"}
	{"level":"info","ts":"2023-12-12T23:58:30.920756Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T23:58:30.920685Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"369b903d3744ebb7","initial-advertise-peer-urls":["https://192.168.50.180:2380"],"listen-peer-urls":["https://192.168.50.180:2380"],"advertise-client-urls":["https://192.168.50.180:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.180:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T23:58:32.595838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 is starting a new election at term 3"}
	{"level":"info","ts":"2023-12-12T23:58:32.595944Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-12-12T23:58:32.595981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 received MsgPreVoteResp from 369b903d3744ebb7 at term 3"}
	{"level":"info","ts":"2023-12-12T23:58:32.59601Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 became candidate at term 4"}
	{"level":"info","ts":"2023-12-12T23:58:32.596034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 received MsgVoteResp from 369b903d3744ebb7 at term 4"}
	{"level":"info","ts":"2023-12-12T23:58:32.596061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 became leader at term 4"}
	{"level":"info","ts":"2023-12-12T23:58:32.596091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 369b903d3744ebb7 elected leader 369b903d3744ebb7 at term 4"}
	{"level":"info","ts":"2023-12-12T23:58:32.5979Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"369b903d3744ebb7","local-member-attributes":"{Name:pause-042245 ClientURLs:[https://192.168.50.180:2379]}","request-path":"/0/members/369b903d3744ebb7/attributes","cluster-id":"81b4a4bc8c2c313","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:58:32.598219Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:58:32.598261Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:58:32.598364Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T23:58:32.598507Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:58:32.599588Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.180:2379"}
	{"level":"info","ts":"2023-12-12T23:58:32.59961Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [d4b43bd6ba4ffe03fd75fc55d07c3f4dbc2a445441a1dd96b4dcb7496f6090a2] <==
	* {"level":"info","ts":"2023-12-12T23:58:21.670511Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.180:2380"}
	{"level":"info","ts":"2023-12-12T23:58:22.050859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-12T23:58:22.050938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-12T23:58:22.050975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 received MsgPreVoteResp from 369b903d3744ebb7 at term 2"}
	{"level":"info","ts":"2023-12-12T23:58:22.051003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 became candidate at term 3"}
	{"level":"info","ts":"2023-12-12T23:58:22.051011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 received MsgVoteResp from 369b903d3744ebb7 at term 3"}
	{"level":"info","ts":"2023-12-12T23:58:22.051023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"369b903d3744ebb7 became leader at term 3"}
	{"level":"info","ts":"2023-12-12T23:58:22.051066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 369b903d3744ebb7 elected leader 369b903d3744ebb7 at term 3"}
	{"level":"info","ts":"2023-12-12T23:58:22.056552Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"369b903d3744ebb7","local-member-attributes":"{Name:pause-042245 ClientURLs:[https://192.168.50.180:2379]}","request-path":"/0/members/369b903d3744ebb7/attributes","cluster-id":"81b4a4bc8c2c313","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:58:22.056563Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:58:22.056698Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:58:22.057887Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T23:58:22.058256Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.180:2379"}
	{"level":"info","ts":"2023-12-12T23:58:22.058745Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:58:22.058792Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T23:58:22.373093Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-12-12T23:58:22.373221Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-042245","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.180:2380"],"advertise-client-urls":["https://192.168.50.180:2379"]}
	{"level":"warn","ts":"2023-12-12T23:58:22.373425Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T23:58:22.373481Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T23:58:22.375198Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.180:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T23:58:22.37526Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.180:2379: use of closed network connection"}
	{"level":"info","ts":"2023-12-12T23:58:22.375476Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"369b903d3744ebb7","current-leader-member-id":"369b903d3744ebb7"}
	{"level":"info","ts":"2023-12-12T23:58:22.379409Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.180:2380"}
	{"level":"info","ts":"2023-12-12T23:58:22.379566Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.180:2380"}
	{"level":"info","ts":"2023-12-12T23:58:22.37962Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-042245","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.180:2380"],"advertise-client-urls":["https://192.168.50.180:2379"]}
	
	* 
	* ==> kernel <==
	*  23:58:56 up 2 min,  0 users,  load average: 2.86, 1.22, 0.46
	Linux pause-042245 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [5d9aa8939e48d5654fb6bcc6fc82ff15e13fb4003570c493118b3771bb0ecf80] <==
	* I1212 23:58:33.967404       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I1212 23:58:33.969112       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1212 23:58:33.967191       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I1212 23:58:34.122875       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 23:58:34.168106       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 23:58:34.170893       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 23:58:34.178756       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1212 23:58:34.187478       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1212 23:58:34.187827       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 23:58:34.185090       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 23:58:34.185104       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 23:58:34.188572       1 aggregator.go:166] initial CRD sync complete...
	I1212 23:58:34.188617       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 23:58:34.188642       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 23:58:34.188665       1 cache.go:39] Caches are synced for autoregister controller
	I1212 23:58:34.186014       1 shared_informer.go:318] Caches are synced for configmaps
	E1212 23:58:34.224453       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 23:58:34.975662       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 23:58:35.572669       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 23:58:35.583355       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 23:58:35.622907       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 23:58:35.659023       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 23:58:35.666146       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 23:58:46.770071       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 23:58:46.770983       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [d00d0e32c8c663f5f5de60451635e5b36869ddfb257bb8e780346779230adf10] <==
	* 
	* 
	* ==> kube-controller-manager [93b1d43ce71f1fe97981104945bb7cfae5f8473934dafeb27fad8af562d64b56] <==
	* 
	* 
	* ==> kube-controller-manager [a71083b64960e98d2e51ee2354a989a374466df0f0a6df79d4455accba113cd3] <==
	* I1212 23:58:46.783369       1 shared_informer.go:318] Caches are synced for TTL
	I1212 23:58:46.785745       1 shared_informer.go:318] Caches are synced for expand
	I1212 23:58:46.788572       1 shared_informer.go:318] Caches are synced for persistent volume
	I1212 23:58:46.792470       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1212 23:58:46.792565       1 shared_informer.go:318] Caches are synced for attach detach
	I1212 23:58:46.795384       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1212 23:58:46.796460       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1212 23:58:46.796825       1 shared_informer.go:318] Caches are synced for deployment
	I1212 23:58:46.797748       1 shared_informer.go:318] Caches are synced for stateful set
	I1212 23:58:46.797817       1 shared_informer.go:318] Caches are synced for HPA
	I1212 23:58:46.797834       1 shared_informer.go:318] Caches are synced for disruption
	I1212 23:58:46.801697       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1212 23:58:46.834657       1 shared_informer.go:318] Caches are synced for taint
	I1212 23:58:46.834916       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1212 23:58:46.835182       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-042245"
	I1212 23:58:46.835356       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1212 23:58:46.835396       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I1212 23:58:46.835419       1 taint_manager.go:210] "Sending events to api server"
	I1212 23:58:46.835824       1 event.go:307] "Event occurred" object="pause-042245" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-042245 event: Registered Node pause-042245 in Controller"
	I1212 23:58:46.877326       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 23:58:46.895187       1 shared_informer.go:318] Caches are synced for daemon sets
	I1212 23:58:46.907963       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 23:58:47.325909       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 23:58:47.326017       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 23:58:47.332665       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [8a27bdab34c1c629010c827d9df542c9be0d2d0297b2e03f2337be318f7cc6de] <==
	* I1212 23:58:34.787708       1 server_others.go:69] "Using iptables proxy"
	I1212 23:58:34.797093       1 node.go:141] Successfully retrieved node IP: 192.168.50.180
	I1212 23:58:34.838043       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 23:58:34.838098       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:58:34.842530       1 server_others.go:152] "Using iptables Proxier"
	I1212 23:58:34.842605       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:58:34.842801       1 server.go:846] "Version info" version="v1.28.4"
	I1212 23:58:34.842810       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:58:34.844051       1 config.go:315] "Starting node config controller"
	I1212 23:58:34.844154       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:58:34.844186       1 config.go:188] "Starting service config controller"
	I1212 23:58:34.844207       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:58:34.844237       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:58:34.844252       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:58:34.944508       1 shared_informer.go:318] Caches are synced for node config
	I1212 23:58:34.944560       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:58:34.944634       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [caf31ac96f345cc97036ac206292af5a9c1b99b1f603dc555e52a0d8c4792b3a] <==
	* I1212 23:58:13.503044       1 server_others.go:69] "Using iptables proxy"
	E1212 23:58:13.505981       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-042245": dial tcp 192.168.50.180:8443: connect: connection refused
	E1212 23:58:14.577541       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-042245": dial tcp 192.168.50.180:8443: connect: connection refused
	E1212 23:58:16.641843       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-042245": dial tcp 192.168.50.180:8443: connect: connection refused
	E1212 23:58:20.932429       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-042245": dial tcp 192.168.50.180:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [465e43015dc4511b642dfd09115f386f9aef0a975b7d501e2cc929e53309e2c4] <==
	* 
	* 
	* ==> kube-scheduler [57d0e1fdef57ae16df2e2939e564f3ab50007a8607d63c1318cd92380d94b707] <==
	* I1212 23:58:31.438558       1 serving.go:348] Generated self-signed cert in-memory
	I1212 23:58:34.160125       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1212 23:58:34.160208       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:58:34.164407       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1212 23:58:34.164476       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1212 23:58:34.164529       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 23:58:34.164553       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 23:58:34.164579       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1212 23:58:34.164600       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1212 23:58:34.165391       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1212 23:58:34.165473       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 23:58:34.266606       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1212 23:58:34.266678       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 23:58:34.266661       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:56:12 UTC, ends at Tue 2023-12-12 23:58:56 UTC. --
	Dec 12 23:58:30 pause-042245 kubelet[3333]: E1212 23:58:30.096569    3333 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.180:8443: connect: connection refused
	Dec 12 23:58:30 pause-042245 kubelet[3333]: W1212 23:58:30.368153    3333 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.180:8443: connect: connection refused
	Dec 12 23:58:30 pause-042245 kubelet[3333]: E1212 23:58:30.368234    3333 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.180:8443: connect: connection refused
	Dec 12 23:58:30 pause-042245 kubelet[3333]: W1212 23:58:30.375827    3333 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-042245&limit=500&resourceVersion=0": dial tcp 192.168.50.180:8443: connect: connection refused
	Dec 12 23:58:30 pause-042245 kubelet[3333]: E1212 23:58:30.375875    3333 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-042245&limit=500&resourceVersion=0": dial tcp 192.168.50.180:8443: connect: connection refused
	Dec 12 23:58:30 pause-042245 kubelet[3333]: E1212 23:58:30.619038    3333 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-042245?timeout=10s\": dial tcp 192.168.50.180:8443: connect: connection refused" interval="1.6s"
	Dec 12 23:58:30 pause-042245 kubelet[3333]: W1212 23:58:30.661945    3333 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.180:8443: connect: connection refused
	Dec 12 23:58:30 pause-042245 kubelet[3333]: E1212 23:58:30.662001    3333 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.180:8443: connect: connection refused
	Dec 12 23:58:30 pause-042245 kubelet[3333]: I1212 23:58:30.722788    3333 kubelet_node_status.go:70] "Attempting to register node" node="pause-042245"
	Dec 12 23:58:30 pause-042245 kubelet[3333]: E1212 23:58:30.723367    3333 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.180:8443: connect: connection refused" node="pause-042245"
	Dec 12 23:58:32 pause-042245 kubelet[3333]: I1212 23:58:32.325475    3333 kubelet_node_status.go:70] "Attempting to register node" node="pause-042245"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.193684    3333 kubelet_node_status.go:108] "Node was previously registered" node="pause-042245"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.193765    3333 kubelet_node_status.go:73] "Successfully registered node" node="pause-042245"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.199902    3333 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.201321    3333 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.207845    3333 apiserver.go:52] "Watching apiserver"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.210728    3333 topology_manager.go:215] "Topology Admit Handler" podUID="0d98d4a6-2802-42de-b6b2-af501fe02612" podNamespace="kube-system" podName="kube-proxy-nk6dp"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.210844    3333 topology_manager.go:215] "Topology Admit Handler" podUID="0b7f28a1-26d7-43f5-b7a9-e9da6beb8c0c" podNamespace="kube-system" podName="coredns-5dd5756b68-6fff5"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.215852    3333 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.316489    3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d98d4a6-2802-42de-b6b2-af501fe02612-lib-modules\") pod \"kube-proxy-nk6dp\" (UID: \"0d98d4a6-2802-42de-b6b2-af501fe02612\") " pod="kube-system/kube-proxy-nk6dp"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.316535    3333 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d98d4a6-2802-42de-b6b2-af501fe02612-xtables-lock\") pod \"kube-proxy-nk6dp\" (UID: \"0d98d4a6-2802-42de-b6b2-af501fe02612\") " pod="kube-system/kube-proxy-nk6dp"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: E1212 23:58:34.453961    3333 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-042245\" already exists" pod="kube-system/kube-apiserver-pause-042245"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.512179    3333 scope.go:117] "RemoveContainer" containerID="31a00c45deaf0619238dcc9e59234b513812432a35ce65dffa76266028010dc2"
	Dec 12 23:58:34 pause-042245 kubelet[3333]: I1212 23:58:34.512948    3333 scope.go:117] "RemoveContainer" containerID="caf31ac96f345cc97036ac206292af5a9c1b99b1f603dc555e52a0d8c4792b3a"
	Dec 12 23:58:42 pause-042245 kubelet[3333]: I1212 23:58:42.413908    3333 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-042245 -n pause-042245
helpers_test.go:261: (dbg) Run:  kubectl --context pause-042245 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (72.76s)
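
For local triage of a failure like this, the post-mortem commands recorded above (helpers_test.go:254 and helpers_test.go:261) can be replayed from a small Go helper. This is only a sketch under the assumptions of this run: the out/minikube-linux-amd64 binary path and the pause-042245 profile/context name are taken from the log, and os/exec stands in for the test suite's own helpers.

package main

import (
	"fmt"
	"os/exec"
)

// runAndPrint executes one command and echoes its combined output,
// mirroring the post-mortem steps logged by helpers_test.go above.
func runAndPrint(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s\n", name, args, out)
	if err != nil {
		fmt.Println("error:", err)
	}
}

func main() {
	// API server status for the profile (helpers_test.go:254).
	runAndPrint("out/minikube-linux-amd64", "status",
		"--format={{.APIServer}}", "-p", "pause-042245", "-n", "pause-042245")
	// Pods that are not Running, across all namespaces (helpers_test.go:261).
	runAndPrint("kubectl", "--context", "pause-042245", "get", "po",
		"-o=jsonpath={.items[*].metadata.name}", "-A",
		"--field-selector=status.phase!=Running")
}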

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (140.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-508612 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-508612 --alsologtostderr -v=3: exit status 82 (2m1.693610966s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-508612"  ...
	* Stopping node "old-k8s-version-508612"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 00:01:03.071184  175798 out.go:296] Setting OutFile to fd 1 ...
	I1213 00:01:03.071360  175798 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:01:03.071367  175798 out.go:309] Setting ErrFile to fd 2...
	I1213 00:01:03.071374  175798 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:01:03.071670  175798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1213 00:01:03.071995  175798 out.go:303] Setting JSON to false
	I1213 00:01:03.072112  175798 mustload.go:65] Loading cluster: old-k8s-version-508612
	I1213 00:01:03.072636  175798 config.go:182] Loaded profile config "old-k8s-version-508612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1213 00:01:03.072730  175798 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/config.json ...
	I1213 00:01:03.072951  175798 mustload.go:65] Loading cluster: old-k8s-version-508612
	I1213 00:01:03.073119  175798 config.go:182] Loaded profile config "old-k8s-version-508612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1213 00:01:03.073157  175798 stop.go:39] StopHost: old-k8s-version-508612
	I1213 00:01:03.073717  175798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:01:03.073789  175798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:01:03.093599  175798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38997
	I1213 00:01:03.094348  175798 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:01:03.095171  175798 main.go:141] libmachine: Using API Version  1
	I1213 00:01:03.095200  175798 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:01:03.095611  175798 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:01:03.097033  175798 out.go:177] * Stopping node "old-k8s-version-508612"  ...
	I1213 00:01:03.098738  175798 main.go:141] libmachine: Stopping "old-k8s-version-508612"...
	I1213 00:01:03.098758  175798 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:01:03.101768  175798 main.go:141] libmachine: (old-k8s-version-508612) Calling .Stop
	I1213 00:01:03.106904  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 0/60
	I1213 00:01:04.108756  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 1/60
	I1213 00:01:05.111040  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 2/60
	I1213 00:01:06.112972  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 3/60
	I1213 00:01:07.114973  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 4/60
	I1213 00:01:08.116729  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 5/60
	I1213 00:01:09.118887  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 6/60
	I1213 00:01:10.120795  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 7/60
	I1213 00:01:11.122936  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 8/60
	I1213 00:01:12.125201  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 9/60
	I1213 00:01:13.127012  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 10/60
	I1213 00:01:14.128328  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 11/60
	I1213 00:01:15.129967  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 12/60
	I1213 00:01:16.131976  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 13/60
	I1213 00:01:17.133897  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 14/60
	I1213 00:01:18.136016  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 15/60
	I1213 00:01:19.138712  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 16/60
	I1213 00:01:20.140367  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 17/60
	I1213 00:01:21.142219  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 18/60
	I1213 00:01:22.143639  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 19/60
	I1213 00:01:23.145935  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 20/60
	I1213 00:01:24.147415  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 21/60
	I1213 00:01:25.149606  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 22/60
	I1213 00:01:26.150914  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 23/60
	I1213 00:01:27.152564  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 24/60
	I1213 00:01:28.154408  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 25/60
	I1213 00:01:29.156061  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 26/60
	I1213 00:01:30.157925  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 27/60
	I1213 00:01:31.159668  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 28/60
	I1213 00:01:32.161009  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 29/60
	I1213 00:01:33.162987  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 30/60
	I1213 00:01:34.164660  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 31/60
	I1213 00:01:35.166932  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 32/60
	I1213 00:01:36.168340  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 33/60
	I1213 00:01:37.169696  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 34/60
	I1213 00:01:38.171573  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 35/60
	I1213 00:01:39.173266  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 36/60
	I1213 00:01:40.174742  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 37/60
	I1213 00:01:41.176131  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 38/60
	I1213 00:01:42.177586  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 39/60
	I1213 00:01:43.179560  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 40/60
	I1213 00:01:44.181365  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 41/60
	I1213 00:01:45.182787  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 42/60
	I1213 00:01:46.184268  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 43/60
	I1213 00:01:47.185691  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 44/60
	I1213 00:01:48.187526  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 45/60
	I1213 00:01:49.189370  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 46/60
	I1213 00:01:50.192034  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 47/60
	I1213 00:01:51.193721  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 48/60
	I1213 00:01:52.195355  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 49/60
	I1213 00:01:53.198018  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 50/60
	I1213 00:01:54.199745  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 51/60
	I1213 00:01:55.202110  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 52/60
	I1213 00:01:56.203394  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 53/60
	I1213 00:01:57.205074  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 54/60
	I1213 00:01:58.207105  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 55/60
	I1213 00:01:59.208538  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 56/60
	I1213 00:02:00.210225  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 57/60
	I1213 00:02:01.212176  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 58/60
	I1213 00:02:02.213640  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 59/60
	I1213 00:02:03.215133  175798 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1213 00:02:03.215199  175798 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1213 00:02:03.215218  175798 retry.go:31] will retry after 1.345689529s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1213 00:02:04.561686  175798 stop.go:39] StopHost: old-k8s-version-508612
	I1213 00:02:04.562098  175798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:02:04.562147  175798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:02:04.576781  175798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43541
	I1213 00:02:04.577304  175798 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:02:04.577789  175798 main.go:141] libmachine: Using API Version  1
	I1213 00:02:04.577808  175798 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:02:04.578125  175798 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:02:04.580275  175798 out.go:177] * Stopping node "old-k8s-version-508612"  ...
	I1213 00:02:04.581641  175798 main.go:141] libmachine: Stopping "old-k8s-version-508612"...
	I1213 00:02:04.581658  175798 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:02:04.583449  175798 main.go:141] libmachine: (old-k8s-version-508612) Calling .Stop
	I1213 00:02:04.586731  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 0/60
	I1213 00:02:05.589245  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 1/60
	I1213 00:02:06.590788  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 2/60
	I1213 00:02:07.592556  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 3/60
	I1213 00:02:08.594072  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 4/60
	I1213 00:02:09.595938  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 5/60
	I1213 00:02:10.597328  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 6/60
	I1213 00:02:11.598829  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 7/60
	I1213 00:02:12.600349  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 8/60
	I1213 00:02:13.601932  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 9/60
	I1213 00:02:14.603905  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 10/60
	I1213 00:02:15.605495  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 11/60
	I1213 00:02:16.606933  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 12/60
	I1213 00:02:17.608506  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 13/60
	I1213 00:02:18.609891  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 14/60
	I1213 00:02:19.611675  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 15/60
	I1213 00:02:20.613345  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 16/60
	I1213 00:02:21.614813  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 17/60
	I1213 00:02:22.616204  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 18/60
	I1213 00:02:23.617872  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 19/60
	I1213 00:02:24.619070  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 20/60
	I1213 00:02:25.620477  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 21/60
	I1213 00:02:26.621765  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 22/60
	I1213 00:02:27.623058  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 23/60
	I1213 00:02:28.624335  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 24/60
	I1213 00:02:29.626003  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 25/60
	I1213 00:02:30.627519  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 26/60
	I1213 00:02:31.628792  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 27/60
	I1213 00:02:32.630237  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 28/60
	I1213 00:02:33.631733  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 29/60
	I1213 00:02:34.633381  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 30/60
	I1213 00:02:35.634862  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 31/60
	I1213 00:02:36.636108  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 32/60
	I1213 00:02:37.637541  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 33/60
	I1213 00:02:38.638588  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 34/60
	I1213 00:02:39.640618  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 35/60
	I1213 00:02:40.642930  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 36/60
	I1213 00:02:41.644228  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 37/60
	I1213 00:02:42.645528  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 38/60
	I1213 00:02:43.646640  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 39/60
	I1213 00:02:44.648257  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 40/60
	I1213 00:02:45.649442  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 41/60
	I1213 00:02:46.650743  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 42/60
	I1213 00:02:47.651826  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 43/60
	I1213 00:02:48.653131  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 44/60
	I1213 00:02:49.654531  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 45/60
	I1213 00:02:50.655858  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 46/60
	I1213 00:02:51.657201  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 47/60
	I1213 00:02:52.658575  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 48/60
	I1213 00:02:53.659841  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 49/60
	I1213 00:02:54.661652  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 50/60
	I1213 00:02:55.663209  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 51/60
	I1213 00:02:56.664506  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 52/60
	I1213 00:02:57.665825  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 53/60
	I1213 00:02:58.667183  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 54/60
	I1213 00:02:59.668979  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 55/60
	I1213 00:03:00.670228  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 56/60
	I1213 00:03:01.671811  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 57/60
	I1213 00:03:02.673294  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 58/60
	I1213 00:03:03.674647  175798 main.go:141] libmachine: (old-k8s-version-508612) Waiting for machine to stop 59/60
	I1213 00:03:04.675561  175798 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1213 00:03:04.675615  175798 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1213 00:03:04.677652  175798 out.go:177] 
	W1213 00:03:04.679240  175798 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1213 00:03:04.679259  175798 out.go:239] * 
	* 
	W1213 00:03:04.681710  175798 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 00:03:04.683870  175798 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-508612 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-508612 -n old-k8s-version-508612
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-508612 -n old-k8s-version-508612: exit status 3 (18.647494689s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 00:03:23.332795  176591 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.70:22: connect: no route to host
	E1213 00:03:23.332816  176591 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.70:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-508612" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (140.34s)
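
The stderr above shows the shape of the failure: the stop path asks the kvm2 driver to stop the VM, polls its state once per second for up to 60 attempts, retries the whole stop once after a short backoff, and finally exits with GUEST_STOP_TIMEOUT (exit status 82) because the machine still reports "Running". The following is a simplified Go sketch of that control flow, not the actual minikube stop.go/retry.go code; getState is a stub standing in for the libmachine driver's state query.

package main

import (
	"errors"
	"fmt"
	"time"
)

// getState is a stub for the kvm2 driver's state query; in this failure it
// keeps answering "Running" for the whole two-minute window.
func getState() string { return "Running" }

// stopOnce mirrors the loop in the log: request a stop, then poll the
// machine state once per second for up to 60 attempts.
func stopOnce(name string) error {
	// driver.Stop() would be issued here.
	for i := 0; i < 60; i++ {
		if getState() == "Stopped" {
			return nil
		}
		fmt.Printf("(%s) Waiting for machine to stop %d/60\n", name, i)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	name := "old-k8s-version-508612"
	if err := stopOnce(name); err != nil {
		// One retry after a short backoff, as logged by retry.go above...
		time.Sleep(1345 * time.Millisecond)
		if err := stopOnce(name); err != nil {
			// ...then the command gives up with GUEST_STOP_TIMEOUT (exit status 82).
			fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
		}
	}
}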

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-335807 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-335807 --alsologtostderr -v=3: exit status 82 (2m1.203998768s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-335807"  ...
	* Stopping node "embed-certs-335807"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 00:01:38.606167  176071 out.go:296] Setting OutFile to fd 1 ...
	I1213 00:01:38.606309  176071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:01:38.606319  176071 out.go:309] Setting ErrFile to fd 2...
	I1213 00:01:38.606324  176071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:01:38.606511  176071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1213 00:01:38.606779  176071 out.go:303] Setting JSON to false
	I1213 00:01:38.606886  176071 mustload.go:65] Loading cluster: embed-certs-335807
	I1213 00:01:38.607230  176071 config.go:182] Loaded profile config "embed-certs-335807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:01:38.607296  176071 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/config.json ...
	I1213 00:01:38.607462  176071 mustload.go:65] Loading cluster: embed-certs-335807
	I1213 00:01:38.607571  176071 config.go:182] Loaded profile config "embed-certs-335807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:01:38.607610  176071 stop.go:39] StopHost: embed-certs-335807
	I1213 00:01:38.608095  176071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:01:38.608146  176071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:01:38.623368  176071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44321
	I1213 00:01:38.623798  176071 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:01:38.624394  176071 main.go:141] libmachine: Using API Version  1
	I1213 00:01:38.624420  176071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:01:38.624825  176071 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:01:38.627945  176071 out.go:177] * Stopping node "embed-certs-335807"  ...
	I1213 00:01:38.629564  176071 main.go:141] libmachine: Stopping "embed-certs-335807"...
	I1213 00:01:38.629583  176071 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:01:38.631330  176071 main.go:141] libmachine: (embed-certs-335807) Calling .Stop
	I1213 00:01:38.634968  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 0/60
	I1213 00:01:39.636650  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 1/60
	I1213 00:01:40.639039  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 2/60
	I1213 00:01:41.640291  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 3/60
	I1213 00:01:42.642064  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 4/60
	I1213 00:01:43.644726  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 5/60
	I1213 00:01:44.646985  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 6/60
	I1213 00:01:45.648505  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 7/60
	I1213 00:01:46.649850  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 8/60
	I1213 00:01:47.651385  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 9/60
	I1213 00:01:48.653892  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 10/60
	I1213 00:01:49.655340  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 11/60
	I1213 00:01:50.657015  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 12/60
	I1213 00:01:51.659026  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 13/60
	I1213 00:01:52.660583  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 14/60
	I1213 00:01:53.662414  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 15/60
	I1213 00:01:54.665147  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 16/60
	I1213 00:01:55.667514  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 17/60
	I1213 00:01:56.669130  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 18/60
	I1213 00:01:57.671306  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 19/60
	I1213 00:01:58.673652  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 20/60
	I1213 00:01:59.675226  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 21/60
	I1213 00:02:00.676789  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 22/60
	I1213 00:02:01.678358  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 23/60
	I1213 00:02:02.679719  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 24/60
	I1213 00:02:03.681744  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 25/60
	I1213 00:02:04.683424  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 26/60
	I1213 00:02:05.684808  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 27/60
	I1213 00:02:06.686872  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 28/60
	I1213 00:02:07.688293  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 29/60
	I1213 00:02:08.690603  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 30/60
	I1213 00:02:09.691967  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 31/60
	I1213 00:02:10.693356  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 32/60
	I1213 00:02:11.694876  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 33/60
	I1213 00:02:12.696316  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 34/60
	I1213 00:02:13.698421  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 35/60
	I1213 00:02:14.699918  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 36/60
	I1213 00:02:15.701477  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 37/60
	I1213 00:02:16.702944  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 38/60
	I1213 00:02:17.704461  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 39/60
	I1213 00:02:18.706536  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 40/60
	I1213 00:02:19.708025  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 41/60
	I1213 00:02:20.709597  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 42/60
	I1213 00:02:21.711032  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 43/60
	I1213 00:02:22.712266  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 44/60
	I1213 00:02:23.714057  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 45/60
	I1213 00:02:24.715189  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 46/60
	I1213 00:02:25.716471  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 47/60
	I1213 00:02:26.717669  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 48/60
	I1213 00:02:27.718808  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 49/60
	I1213 00:02:28.721072  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 50/60
	I1213 00:02:29.722235  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 51/60
	I1213 00:02:30.723605  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 52/60
	I1213 00:02:31.724802  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 53/60
	I1213 00:02:32.727004  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 54/60
	I1213 00:02:33.729292  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 55/60
	I1213 00:02:34.730403  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 56/60
	I1213 00:02:35.731634  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 57/60
	I1213 00:02:36.732935  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 58/60
	I1213 00:02:37.734233  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 59/60
	I1213 00:02:38.735343  176071 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1213 00:02:38.735424  176071 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1213 00:02:38.735477  176071 retry.go:31] will retry after 900.447935ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1213 00:02:39.636577  176071 stop.go:39] StopHost: embed-certs-335807
	I1213 00:02:39.636986  176071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:02:39.637036  176071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:02:39.651649  176071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39827
	I1213 00:02:39.652079  176071 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:02:39.652552  176071 main.go:141] libmachine: Using API Version  1
	I1213 00:02:39.652574  176071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:02:39.652888  176071 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:02:39.654914  176071 out.go:177] * Stopping node "embed-certs-335807"  ...
	I1213 00:02:39.656241  176071 main.go:141] libmachine: Stopping "embed-certs-335807"...
	I1213 00:02:39.656259  176071 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:02:39.658040  176071 main.go:141] libmachine: (embed-certs-335807) Calling .Stop
	I1213 00:02:39.661110  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 0/60
	I1213 00:02:40.662399  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 1/60
	I1213 00:02:41.663430  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 2/60
	I1213 00:02:42.664508  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 3/60
	I1213 00:02:43.665526  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 4/60
	I1213 00:02:44.666886  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 5/60
	I1213 00:02:45.668139  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 6/60
	I1213 00:02:46.669140  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 7/60
	I1213 00:02:47.670255  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 8/60
	I1213 00:02:48.671248  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 9/60
	I1213 00:02:49.673038  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 10/60
	I1213 00:02:50.674238  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 11/60
	I1213 00:02:51.675377  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 12/60
	I1213 00:02:52.676613  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 13/60
	I1213 00:02:53.678549  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 14/60
	I1213 00:02:54.680149  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 15/60
	I1213 00:02:55.681499  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 16/60
	I1213 00:02:56.682655  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 17/60
	I1213 00:02:57.683678  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 18/60
	I1213 00:02:58.684872  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 19/60
	I1213 00:02:59.686646  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 20/60
	I1213 00:03:00.687777  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 21/60
	I1213 00:03:01.688973  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 22/60
	I1213 00:03:02.690557  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 23/60
	I1213 00:03:03.691636  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 24/60
	I1213 00:03:04.693560  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 25/60
	I1213 00:03:05.694873  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 26/60
	I1213 00:03:06.696495  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 27/60
	I1213 00:03:07.697895  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 28/60
	I1213 00:03:08.699410  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 29/60
	I1213 00:03:09.701367  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 30/60
	I1213 00:03:10.702649  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 31/60
	I1213 00:03:11.704134  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 32/60
	I1213 00:03:12.705437  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 33/60
	I1213 00:03:13.707001  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 34/60
	I1213 00:03:14.708830  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 35/60
	I1213 00:03:15.710490  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 36/60
	I1213 00:03:16.711855  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 37/60
	I1213 00:03:17.713376  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 38/60
	I1213 00:03:18.714673  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 39/60
	I1213 00:03:19.716422  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 40/60
	I1213 00:03:20.717858  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 41/60
	I1213 00:03:21.719132  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 42/60
	I1213 00:03:22.720605  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 43/60
	I1213 00:03:23.721824  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 44/60
	I1213 00:03:24.723506  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 45/60
	I1213 00:03:25.724868  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 46/60
	I1213 00:03:26.726140  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 47/60
	I1213 00:03:27.727569  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 48/60
	I1213 00:03:28.729141  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 49/60
	I1213 00:03:29.731344  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 50/60
	I1213 00:03:30.732691  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 51/60
	I1213 00:03:31.733942  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 52/60
	I1213 00:03:32.735214  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 53/60
	I1213 00:03:33.736577  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 54/60
	I1213 00:03:34.738186  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 55/60
	I1213 00:03:35.739549  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 56/60
	I1213 00:03:36.740886  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 57/60
	I1213 00:03:37.742259  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 58/60
	I1213 00:03:38.743749  176071 main.go:141] libmachine: (embed-certs-335807) Waiting for machine to stop 59/60
	I1213 00:03:39.744652  176071 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1213 00:03:39.744699  176071 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1213 00:03:39.746880  176071 out.go:177] 
	W1213 00:03:39.748166  176071 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1213 00:03:39.748183  176071 out.go:239] * 
	* 
	W1213 00:03:39.750638  176071 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 00:03:39.752029  176071 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p embed-certs-335807 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-335807 -n embed-certs-335807
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-335807 -n embed-certs-335807: exit status 3 (18.650961554s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 00:03:58.404719  176866 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.249:22: connect: no route to host
	E1213 00:03:58.404739  176866 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.249:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-335807" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.86s)
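For context on the failure pattern repeated in the logs above (and in the no-preload and default-k8s-diff-port sections below): "minikube stop" asks the kvm2 driver to stop the guest, polls its state roughly once per second for 60 attempts, retries the whole stop once after a sub-second backoff, and then exits with GUEST_STOP_TIMEOUT (exit status 82) because the VM still reports "Running". The Go sketch below is a hypothetical reconstruction of that loop inferred only from these log lines, not the actual minikube/libmachine source; stopVM and vmIsRunning are placeholder names standing in for the driver's .Stop and .GetState calls.

package main

import (
	"errors"
	"fmt"
	"time"
)

// Hypothetical stand-ins for the driver calls seen in the log
// ("Calling .Stop" / "Calling .GetState"); in this failure the guest
// never leaves the "Running" state.
func stopVM() error     { return nil }
func vmIsRunning() bool { return true }

// waitForStop mirrors the "Waiting for machine to stop i/60" lines:
// request a stop, then poll about once per second for up to 60 attempts.
func waitForStop(attempts int) error {
	if err := stopVM(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		if !vmIsRunning() {
			return nil
		}
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// One retry after a sub-second backoff (the "will retry after ~900ms"
	// line), after which the command gives up with GUEST_STOP_TIMEOUT.
	if err := waitForStop(60); err != nil {
		time.Sleep(900 * time.Millisecond)
		if err = waitForStop(60); err != nil {
			fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
		}
	}
}

The two 60-attempt loops account for the roughly two minutes each Stop test spends before failing; the subsequent status check then reports exit status 3 because SSH to the still-transitioning guest has no route to host.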

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.85s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-143586 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-143586 --alsologtostderr -v=3: exit status 82 (2m1.220475978s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-143586"  ...
	* Stopping node "no-preload-143586"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 00:01:57.043274  176266 out.go:296] Setting OutFile to fd 1 ...
	I1213 00:01:57.043534  176266 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:01:57.043542  176266 out.go:309] Setting ErrFile to fd 2...
	I1213 00:01:57.043548  176266 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:01:57.043741  176266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1213 00:01:57.043981  176266 out.go:303] Setting JSON to false
	I1213 00:01:57.044063  176266 mustload.go:65] Loading cluster: no-preload-143586
	I1213 00:01:57.044410  176266 config.go:182] Loaded profile config "no-preload-143586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1213 00:01:57.044508  176266 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/config.json ...
	I1213 00:01:57.044683  176266 mustload.go:65] Loading cluster: no-preload-143586
	I1213 00:01:57.044790  176266 config.go:182] Loaded profile config "no-preload-143586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1213 00:01:57.044817  176266 stop.go:39] StopHost: no-preload-143586
	I1213 00:01:57.045305  176266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:01:57.045361  176266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:01:57.059828  176266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45645
	I1213 00:01:57.060318  176266 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:01:57.061027  176266 main.go:141] libmachine: Using API Version  1
	I1213 00:01:57.061062  176266 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:01:57.061533  176266 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:01:57.063820  176266 out.go:177] * Stopping node "no-preload-143586"  ...
	I1213 00:01:57.065705  176266 main.go:141] libmachine: Stopping "no-preload-143586"...
	I1213 00:01:57.065724  176266 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:01:57.067679  176266 main.go:141] libmachine: (no-preload-143586) Calling .Stop
	I1213 00:01:57.070803  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 0/60
	I1213 00:01:58.072601  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 1/60
	I1213 00:01:59.075209  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 2/60
	I1213 00:02:00.076657  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 3/60
	I1213 00:02:01.078027  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 4/60
	I1213 00:02:02.080084  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 5/60
	I1213 00:02:03.081397  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 6/60
	I1213 00:02:04.083016  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 7/60
	I1213 00:02:05.084644  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 8/60
	I1213 00:02:06.087081  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 9/60
	I1213 00:02:07.089426  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 10/60
	I1213 00:02:08.090920  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 11/60
	I1213 00:02:09.092426  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 12/60
	I1213 00:02:10.094480  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 13/60
	I1213 00:02:11.096223  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 14/60
	I1213 00:02:12.098693  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 15/60
	I1213 00:02:13.100374  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 16/60
	I1213 00:02:14.101965  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 17/60
	I1213 00:02:15.103277  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 18/60
	I1213 00:02:16.104919  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 19/60
	I1213 00:02:17.107270  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 20/60
	I1213 00:02:18.108683  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 21/60
	I1213 00:02:19.110166  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 22/60
	I1213 00:02:20.111698  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 23/60
	I1213 00:02:21.113227  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 24/60
	I1213 00:02:22.115410  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 25/60
	I1213 00:02:23.116624  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 26/60
	I1213 00:02:24.118191  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 27/60
	I1213 00:02:25.119423  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 28/60
	I1213 00:02:26.120888  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 29/60
	I1213 00:02:27.123124  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 30/60
	I1213 00:02:28.124334  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 31/60
	I1213 00:02:29.125625  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 32/60
	I1213 00:02:30.127101  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 33/60
	I1213 00:02:31.128326  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 34/60
	I1213 00:02:32.130284  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 35/60
	I1213 00:02:33.131781  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 36/60
	I1213 00:02:34.133539  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 37/60
	I1213 00:02:35.134854  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 38/60
	I1213 00:02:36.136279  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 39/60
	I1213 00:02:37.138258  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 40/60
	I1213 00:02:38.139595  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 41/60
	I1213 00:02:39.141102  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 42/60
	I1213 00:02:40.142614  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 43/60
	I1213 00:02:41.143884  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 44/60
	I1213 00:02:42.145897  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 45/60
	I1213 00:02:43.147236  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 46/60
	I1213 00:02:44.148621  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 47/60
	I1213 00:02:45.149894  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 48/60
	I1213 00:02:46.151216  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 49/60
	I1213 00:02:47.153268  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 50/60
	I1213 00:02:48.154593  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 51/60
	I1213 00:02:49.155893  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 52/60
	I1213 00:02:50.157318  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 53/60
	I1213 00:02:51.158748  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 54/60
	I1213 00:02:52.160591  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 55/60
	I1213 00:02:53.161827  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 56/60
	I1213 00:02:54.163343  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 57/60
	I1213 00:02:55.164825  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 58/60
	I1213 00:02:56.166273  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 59/60
	I1213 00:02:57.167648  176266 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1213 00:02:57.167702  176266 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1213 00:02:57.167725  176266 retry.go:31] will retry after 909.105836ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1213 00:02:58.077777  176266 stop.go:39] StopHost: no-preload-143586
	I1213 00:02:58.078206  176266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:02:58.078271  176266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:02:58.092820  176266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I1213 00:02:58.093313  176266 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:02:58.093873  176266 main.go:141] libmachine: Using API Version  1
	I1213 00:02:58.093899  176266 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:02:58.094243  176266 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:02:58.096293  176266 out.go:177] * Stopping node "no-preload-143586"  ...
	I1213 00:02:58.097825  176266 main.go:141] libmachine: Stopping "no-preload-143586"...
	I1213 00:02:58.097847  176266 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:02:58.099587  176266 main.go:141] libmachine: (no-preload-143586) Calling .Stop
	I1213 00:02:58.102787  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 0/60
	I1213 00:02:59.104444  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 1/60
	I1213 00:03:00.105717  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 2/60
	I1213 00:03:01.107205  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 3/60
	I1213 00:03:02.108439  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 4/60
	I1213 00:03:03.110399  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 5/60
	I1213 00:03:04.111754  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 6/60
	I1213 00:03:05.113177  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 7/60
	I1213 00:03:06.114748  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 8/60
	I1213 00:03:07.116117  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 9/60
	I1213 00:03:08.118065  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 10/60
	I1213 00:03:09.119515  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 11/60
	I1213 00:03:10.121001  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 12/60
	I1213 00:03:11.122897  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 13/60
	I1213 00:03:12.124384  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 14/60
	I1213 00:03:13.126125  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 15/60
	I1213 00:03:14.127492  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 16/60
	I1213 00:03:15.129394  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 17/60
	I1213 00:03:16.130781  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 18/60
	I1213 00:03:17.132481  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 19/60
	I1213 00:03:18.134378  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 20/60
	I1213 00:03:19.135708  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 21/60
	I1213 00:03:20.137127  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 22/60
	I1213 00:03:21.138436  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 23/60
	I1213 00:03:22.139884  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 24/60
	I1213 00:03:23.141865  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 25/60
	I1213 00:03:24.143232  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 26/60
	I1213 00:03:25.144751  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 27/60
	I1213 00:03:26.146793  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 28/60
	I1213 00:03:27.148207  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 29/60
	I1213 00:03:28.150082  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 30/60
	I1213 00:03:29.151599  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 31/60
	I1213 00:03:30.153034  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 32/60
	I1213 00:03:31.154428  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 33/60
	I1213 00:03:32.155612  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 34/60
	I1213 00:03:33.157511  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 35/60
	I1213 00:03:34.158844  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 36/60
	I1213 00:03:35.160606  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 37/60
	I1213 00:03:36.161983  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 38/60
	I1213 00:03:37.163489  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 39/60
	I1213 00:03:38.165608  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 40/60
	I1213 00:03:39.167075  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 41/60
	I1213 00:03:40.168364  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 42/60
	I1213 00:03:41.169727  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 43/60
	I1213 00:03:42.171057  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 44/60
	I1213 00:03:43.172939  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 45/60
	I1213 00:03:44.174291  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 46/60
	I1213 00:03:45.175598  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 47/60
	I1213 00:03:46.176918  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 48/60
	I1213 00:03:47.178206  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 49/60
	I1213 00:03:48.179795  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 50/60
	I1213 00:03:49.181258  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 51/60
	I1213 00:03:50.182739  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 52/60
	I1213 00:03:51.184070  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 53/60
	I1213 00:03:52.185339  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 54/60
	I1213 00:03:53.186992  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 55/60
	I1213 00:03:54.188198  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 56/60
	I1213 00:03:55.189611  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 57/60
	I1213 00:03:56.190989  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 58/60
	I1213 00:03:57.192270  176266 main.go:141] libmachine: (no-preload-143586) Waiting for machine to stop 59/60
	I1213 00:03:58.193245  176266 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1213 00:03:58.193290  176266 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1213 00:03:58.195545  176266 out.go:177] 
	W1213 00:03:58.197110  176266 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1213 00:03:58.197138  176266 out.go:239] * 
	* 
	W1213 00:03:58.199380  176266 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 00:03:58.200868  176266 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p no-preload-143586 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-143586 -n no-preload-143586
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-143586 -n no-preload-143586: exit status 3 (18.63298726s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 00:04:16.836746  176940 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.181:22: connect: no route to host
	E1213 00:04:16.836768  176940 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.181:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-143586" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.85s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-743278 --alsologtostderr -v=3
E1213 00:02:45.320337  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-743278 --alsologtostderr -v=3: exit status 82 (2m1.30286154s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-743278"  ...
	* Stopping node "default-k8s-diff-port-743278"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 00:02:07.843760  176360 out.go:296] Setting OutFile to fd 1 ...
	I1213 00:02:07.843892  176360 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:02:07.843899  176360 out.go:309] Setting ErrFile to fd 2...
	I1213 00:02:07.843905  176360 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:02:07.844125  176360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1213 00:02:07.844426  176360 out.go:303] Setting JSON to false
	I1213 00:02:07.844573  176360 mustload.go:65] Loading cluster: default-k8s-diff-port-743278
	I1213 00:02:07.844988  176360 config.go:182] Loaded profile config "default-k8s-diff-port-743278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:02:07.845075  176360 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/config.json ...
	I1213 00:02:07.845277  176360 mustload.go:65] Loading cluster: default-k8s-diff-port-743278
	I1213 00:02:07.845417  176360 config.go:182] Loaded profile config "default-k8s-diff-port-743278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:02:07.845457  176360 stop.go:39] StopHost: default-k8s-diff-port-743278
	I1213 00:02:07.845861  176360 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:02:07.845926  176360 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:02:07.860225  176360 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
	I1213 00:02:07.860658  176360 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:02:07.861191  176360 main.go:141] libmachine: Using API Version  1
	I1213 00:02:07.861213  176360 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:02:07.861520  176360 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:02:07.863866  176360 out.go:177] * Stopping node "default-k8s-diff-port-743278"  ...
	I1213 00:02:07.865543  176360 main.go:141] libmachine: Stopping "default-k8s-diff-port-743278"...
	I1213 00:02:07.865565  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:02:07.867274  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Stop
	I1213 00:02:07.870511  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 0/60
	I1213 00:02:08.872087  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 1/60
	I1213 00:02:09.873651  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 2/60
	I1213 00:02:10.875687  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 3/60
	I1213 00:02:11.877161  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 4/60
	I1213 00:02:12.879194  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 5/60
	I1213 00:02:13.880704  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 6/60
	I1213 00:02:14.882286  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 7/60
	I1213 00:02:15.883603  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 8/60
	I1213 00:02:16.885118  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 9/60
	I1213 00:02:17.886384  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 10/60
	I1213 00:02:18.887802  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 11/60
	I1213 00:02:19.889328  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 12/60
	I1213 00:02:20.890713  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 13/60
	I1213 00:02:21.892045  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 14/60
	I1213 00:02:22.893906  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 15/60
	I1213 00:02:23.895166  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 16/60
	I1213 00:02:24.896296  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 17/60
	I1213 00:02:25.897609  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 18/60
	I1213 00:02:26.898755  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 19/60
	I1213 00:02:27.900170  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 20/60
	I1213 00:02:28.901455  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 21/60
	I1213 00:02:29.902967  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 22/60
	I1213 00:02:30.904162  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 23/60
	I1213 00:02:31.905573  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 24/60
	I1213 00:02:32.907755  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 25/60
	I1213 00:02:33.909391  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 26/60
	I1213 00:02:34.910684  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 27/60
	I1213 00:02:35.911907  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 28/60
	I1213 00:02:36.913241  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 29/60
	I1213 00:02:37.915232  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 30/60
	I1213 00:02:38.916674  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 31/60
	I1213 00:02:39.917821  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 32/60
	I1213 00:02:40.919287  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 33/60
	I1213 00:02:41.920470  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 34/60
	I1213 00:02:42.922383  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 35/60
	I1213 00:02:43.923783  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 36/60
	I1213 00:02:44.925173  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 37/60
	I1213 00:02:45.926356  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 38/60
	I1213 00:02:46.927466  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 39/60
	I1213 00:02:47.929851  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 40/60
	I1213 00:02:48.931169  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 41/60
	I1213 00:02:49.932935  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 42/60
	I1213 00:02:50.934340  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 43/60
	I1213 00:02:51.935826  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 44/60
	I1213 00:02:52.937878  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 45/60
	I1213 00:02:53.939213  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 46/60
	I1213 00:02:54.940626  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 47/60
	I1213 00:02:55.942976  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 48/60
	I1213 00:02:56.944300  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 49/60
	I1213 00:02:57.946139  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 50/60
	I1213 00:02:58.947300  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 51/60
	I1213 00:02:59.948823  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 52/60
	I1213 00:03:00.950023  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 53/60
	I1213 00:03:01.951480  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 54/60
	I1213 00:03:02.953385  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 55/60
	I1213 00:03:03.955077  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 56/60
	I1213 00:03:04.956491  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 57/60
	I1213 00:03:05.958156  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 58/60
	I1213 00:03:06.959713  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 59/60
	I1213 00:03:07.961025  176360 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1213 00:03:07.961090  176360 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1213 00:03:07.961108  176360 retry.go:31] will retry after 993.479584ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1213 00:03:08.955222  176360 stop.go:39] StopHost: default-k8s-diff-port-743278
	I1213 00:03:08.955642  176360 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:03:08.955699  176360 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:03:08.970044  176360 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I1213 00:03:08.970457  176360 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:03:08.970996  176360 main.go:141] libmachine: Using API Version  1
	I1213 00:03:08.971025  176360 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:03:08.971396  176360 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:03:08.973427  176360 out.go:177] * Stopping node "default-k8s-diff-port-743278"  ...
	I1213 00:03:08.974923  176360 main.go:141] libmachine: Stopping "default-k8s-diff-port-743278"...
	I1213 00:03:08.974947  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:03:08.976724  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Stop
	I1213 00:03:08.980978  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 0/60
	I1213 00:03:09.982586  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 1/60
	I1213 00:03:10.983791  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 2/60
	I1213 00:03:11.985418  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 3/60
	I1213 00:03:12.986879  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 4/60
	I1213 00:03:13.988727  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 5/60
	I1213 00:03:14.990151  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 6/60
	I1213 00:03:15.991600  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 7/60
	I1213 00:03:16.993031  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 8/60
	I1213 00:03:17.994451  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 9/60
	I1213 00:03:18.996422  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 10/60
	I1213 00:03:19.997980  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 11/60
	I1213 00:03:20.999302  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 12/60
	I1213 00:03:22.000906  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 13/60
	I1213 00:03:23.002140  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 14/60
	I1213 00:03:24.004793  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 15/60
	I1213 00:03:25.006479  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 16/60
	I1213 00:03:26.007893  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 17/60
	I1213 00:03:27.009243  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 18/60
	I1213 00:03:28.011106  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 19/60
	I1213 00:03:29.012486  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 20/60
	I1213 00:03:30.013900  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 21/60
	I1213 00:03:31.015315  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 22/60
	I1213 00:03:32.016726  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 23/60
	I1213 00:03:33.018055  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 24/60
	I1213 00:03:34.019751  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 25/60
	I1213 00:03:35.021258  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 26/60
	I1213 00:03:36.022577  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 27/60
	I1213 00:03:37.023712  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 28/60
	I1213 00:03:38.025067  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 29/60
	I1213 00:03:39.027072  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 30/60
	I1213 00:03:40.028298  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 31/60
	I1213 00:03:41.029855  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 32/60
	I1213 00:03:42.031243  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 33/60
	I1213 00:03:43.032756  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 34/60
	I1213 00:03:44.034609  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 35/60
	I1213 00:03:45.035899  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 36/60
	I1213 00:03:46.037402  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 37/60
	I1213 00:03:47.038769  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 38/60
	I1213 00:03:48.040205  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 39/60
	I1213 00:03:49.041816  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 40/60
	I1213 00:03:50.043288  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 41/60
	I1213 00:03:51.044585  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 42/60
	I1213 00:03:52.046093  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 43/60
	I1213 00:03:53.047290  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 44/60
	I1213 00:03:54.048885  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 45/60
	I1213 00:03:55.050210  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 46/60
	I1213 00:03:56.051632  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 47/60
	I1213 00:03:57.052855  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 48/60
	I1213 00:03:58.054243  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 49/60
	I1213 00:03:59.056311  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 50/60
	I1213 00:04:00.057733  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 51/60
	I1213 00:04:01.059785  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 52/60
	I1213 00:04:02.061125  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 53/60
	I1213 00:04:03.062781  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 54/60
	I1213 00:04:04.064873  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 55/60
	I1213 00:04:05.067303  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 56/60
	I1213 00:04:06.068759  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 57/60
	I1213 00:04:07.070190  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 58/60
	I1213 00:04:08.071777  176360 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for machine to stop 59/60
	I1213 00:04:09.072789  176360 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1213 00:04:09.072841  176360 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1213 00:04:09.075166  176360 out.go:177] 
	W1213 00:04:09.076785  176360 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1213 00:04:09.076804  176360 out.go:239] * 
	W1213 00:04:09.079217  176360 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 00:04:09.080629  176360 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-743278 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-743278 -n default-k8s-diff-port-743278
E1213 00:04:10.663444  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-743278 -n default-k8s-diff-port-743278: exit status 3 (18.505893419s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 00:04:27.588757  177092 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.144:22: connect: no route to host
	E1213 00:04:27.588779  177092 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.144:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-743278" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.81s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-508612 -n old-k8s-version-508612
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-508612 -n old-k8s-version-508612: exit status 3 (3.19973191s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 00:03:26.532804  176712 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.70:22: connect: no route to host
	E1213 00:03:26.532825  176712 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.70:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-508612 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-508612 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153411261s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.70:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-508612 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-508612 -n old-k8s-version-508612
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-508612 -n old-k8s-version-508612: exit status 3 (3.062375518s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 00:03:35.748869  176783 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.70:22: connect: no route to host
	E1213 00:03:35.748918  176783 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.70:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-508612" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-335807 -n embed-certs-335807
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-335807 -n embed-certs-335807: exit status 3 (3.167980475s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 00:04:01.572780  176970 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.249:22: connect: no route to host
	E1213 00:04:01.572802  176970 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.249:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-335807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-335807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152903539s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.249:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-335807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-335807 -n embed-certs-335807
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-335807 -n embed-certs-335807: exit status 3 (3.063071288s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 00:04:10.788864  177062 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.249:22: connect: no route to host
	E1213 00:04:10.788895  177062 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.249:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-335807" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-143586 -n no-preload-143586
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-143586 -n no-preload-143586: exit status 3 (3.167876507s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 00:04:20.004867  177179 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.181:22: connect: no route to host
	E1213 00:04:20.004919  177179 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.181:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-143586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-143586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153278082s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.181:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-143586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-143586 -n no-preload-143586
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-143586 -n no-preload-143586: exit status 3 (3.062642882s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 00:04:29.220808  177236 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.181:22: connect: no route to host
	E1213 00:04:29.220827  177236 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.181:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-143586" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-743278 -n default-k8s-diff-port-743278
E1213 00:04:27.617269  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-743278 -n default-k8s-diff-port-743278: exit status 3 (3.168196606s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 00:04:30.756782  177277 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.144:22: connect: no route to host
	E1213 00:04:30.756805  177277 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.144:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-743278 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-743278 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152780298s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.144:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-743278 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-743278 -n default-k8s-diff-port-743278
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-743278 -n default-k8s-diff-port-743278: exit status 3 (3.062848505s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 00:04:39.972853  177380 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.144:22: connect: no route to host
	E1213 00:04:39.972876  177380 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.144:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-743278" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-335807 -n embed-certs-335807
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-13 00:23:04.520374875 +0000 UTC m=+5308.076512420
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-335807 -n embed-certs-335807
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-335807 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-335807 logs -n 25: (1.694531879s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 13 Dec 23 00:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-380248                              | cert-expiration-380248       | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 12 Dec 23 23:58 UTC |
	| start   | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 13 Dec 23 00:01 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p pause-042245                                        | pause-042245                 | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 12 Dec 23 23:58 UTC |
	| start   | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 13 Dec 23 00:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-884273                              | stopped-upgrade-884273       | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-884273                              | stopped-upgrade-884273       | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC | 13 Dec 23 00:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-343019 | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC | 13 Dec 23 00:00 UTC |
	|         | disable-driver-mounts-343019                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC | 13 Dec 23 00:01 UTC |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-508612        | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-335807            | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-143586             | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-743278  | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:02 UTC | 13 Dec 23 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:02 UTC |                     |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-508612             | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:03 UTC | 13 Dec 23 00:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-335807                 | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-143586                  | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-743278       | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/13 00:04:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 00:04:40.034430  177409 out.go:296] Setting OutFile to fd 1 ...
	I1213 00:04:40.034592  177409 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:04:40.034601  177409 out.go:309] Setting ErrFile to fd 2...
	I1213 00:04:40.034606  177409 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:04:40.034805  177409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1213 00:04:40.035357  177409 out.go:303] Setting JSON to false
	I1213 00:04:40.036280  177409 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10028,"bootTime":1702415852,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 00:04:40.036342  177409 start.go:138] virtualization: kvm guest
	I1213 00:04:40.038707  177409 out.go:177] * [default-k8s-diff-port-743278] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 00:04:40.040139  177409 out.go:177]   - MINIKUBE_LOCATION=17777
	I1213 00:04:40.040129  177409 notify.go:220] Checking for updates...
	I1213 00:04:40.041788  177409 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 00:04:40.043246  177409 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:04:40.044627  177409 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1213 00:04:40.046091  177409 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 00:04:40.047562  177409 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 00:04:40.049427  177409 config.go:182] Loaded profile config "default-k8s-diff-port-743278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:04:40.049930  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:04:40.049979  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:04:40.064447  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44247
	I1213 00:04:40.064825  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:04:40.065333  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:04:40.065352  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:04:40.065686  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:04:40.065850  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:04:40.066092  177409 driver.go:392] Setting default libvirt URI to qemu:///system
	I1213 00:04:40.066357  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:04:40.066389  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:04:40.080217  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38499
	I1213 00:04:40.080643  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:04:40.081072  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:04:40.081098  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:04:40.081436  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:04:40.081622  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:04:40.114108  177409 out.go:177] * Using the kvm2 driver based on existing profile
	I1213 00:04:40.115585  177409 start.go:298] selected driver: kvm2
	I1213 00:04:40.115603  177409 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-743278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-743278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:04:40.115714  177409 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 00:04:40.116379  177409 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:04:40.116485  177409 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17777-136241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1213 00:04:40.131964  177409 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1213 00:04:40.132324  177409 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 00:04:40.132392  177409 cni.go:84] Creating CNI manager for ""
	I1213 00:04:40.132405  177409 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:04:40.132416  177409 start_flags.go:323] config:
	{Name:default-k8s-diff-port-743278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-743278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:04:40.132599  177409 iso.go:125] acquiring lock: {Name:mkaf0c5717f5bf6253bd7ebf86eb20a82d195bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:04:40.135330  177409 out.go:177] * Starting control plane node default-k8s-diff-port-743278 in cluster default-k8s-diff-port-743278
	I1213 00:04:36.772718  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:39.844694  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:40.136912  177409 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1213 00:04:40.136959  177409 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1213 00:04:40.136972  177409 cache.go:56] Caching tarball of preloaded images
	I1213 00:04:40.137094  177409 preload.go:174] Found /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 00:04:40.137108  177409 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1213 00:04:40.137215  177409 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/config.json ...
	I1213 00:04:40.137413  177409 start.go:365] acquiring machines lock for default-k8s-diff-port-743278: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 00:04:45.924700  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:48.996768  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:55.076732  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:58.148779  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:04.228721  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:07.300700  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:13.380743  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:16.452690  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:22.532695  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:25.604771  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:31.684681  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:34.756720  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:40.836697  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:43.908711  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:49.988729  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:53.060691  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:59.140737  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:02.212709  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:08.292717  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:11.364746  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:17.444722  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:20.516796  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:26.596650  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:29.668701  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:35.748723  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:38.820688  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:44.900719  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:47.972683  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:54.052708  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:57.124684  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:03.204728  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:06.276720  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:12.356681  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:15.428743  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:21.508696  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:24.580749  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:30.660747  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:33.732746  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:39.812738  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:42.884767  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:48.964744  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:52.036691  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:58.116726  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:08:01.188638  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:08:07.268756  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:08:10.340725  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
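
The long run of "connect: no route to host" messages above is libmachine repeatedly probing the stopped VM's SSH endpoint (192.168.39.70:22) until the machine answers or provisioning gives up. As a rough standalone illustration of that pattern (not minikube's actual code; the interval and deadline below are invented for the sketch), a bounded dial-and-retry loop in Go looks like this:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForTCP polls addr until a TCP connection succeeds or the deadline passes.
    func waitForTCP(addr string, interval, deadline time.Duration) error {
    	stop := time.Now().Add(deadline)
    	for {
    		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		if time.Now().After(stop) {
    			return fmt.Errorf("giving up on %s: %w", addr, err)
    		}
    		fmt.Printf("dial %s failed (%v), retrying in %s\n", addr, err, interval)
    		time.Sleep(interval)
    	}
    }

    func main() {
    	// 192.168.39.70:22 is the endpoint being probed in the log above;
    	// the 3s interval and 2m budget are placeholders for illustration.
    	if err := waitForTCP("192.168.39.70:22", 3*time.Second, 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }

In the failing run above the endpoint never comes up, so the loop eventually surfaces as "StartHost failed, but will try again: provision: host is not running".
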
	I1213 00:08:13.345031  177122 start.go:369] acquired machines lock for "embed-certs-335807" in 4m2.39512191s
	I1213 00:08:13.345120  177122 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:08:13.345129  177122 fix.go:54] fixHost starting: 
	I1213 00:08:13.345524  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:08:13.345564  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:08:13.360333  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44799
	I1213 00:08:13.360832  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:08:13.361366  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:08:13.361390  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:08:13.361769  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:08:13.361941  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:13.362104  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:08:13.363919  177122 fix.go:102] recreateIfNeeded on embed-certs-335807: state=Stopped err=<nil>
	I1213 00:08:13.363938  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	W1213 00:08:13.364125  177122 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:08:13.366077  177122 out.go:177] * Restarting existing kvm2 VM for "embed-certs-335807" ...
	I1213 00:08:13.342763  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:08:13.342804  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:08:13.344878  176813 machine.go:91] provisioned docker machine in 4m37.409041046s
	I1213 00:08:13.344942  176813 fix.go:56] fixHost completed within 4m37.430106775s
	I1213 00:08:13.344949  176813 start.go:83] releasing machines lock for "old-k8s-version-508612", held for 4m37.430132032s
	W1213 00:08:13.344965  176813 start.go:694] error starting host: provision: host is not running
	W1213 00:08:13.345107  176813 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1213 00:08:13.345120  176813 start.go:709] Will try again in 5 seconds ...
	I1213 00:08:13.367310  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Start
	I1213 00:08:13.367451  177122 main.go:141] libmachine: (embed-certs-335807) Ensuring networks are active...
	I1213 00:08:13.368551  177122 main.go:141] libmachine: (embed-certs-335807) Ensuring network default is active
	I1213 00:08:13.368936  177122 main.go:141] libmachine: (embed-certs-335807) Ensuring network mk-embed-certs-335807 is active
	I1213 00:08:13.369290  177122 main.go:141] libmachine: (embed-certs-335807) Getting domain xml...
	I1213 00:08:13.369993  177122 main.go:141] libmachine: (embed-certs-335807) Creating domain...
	I1213 00:08:14.617766  177122 main.go:141] libmachine: (embed-certs-335807) Waiting to get IP...
	I1213 00:08:14.618837  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:14.619186  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:14.619322  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:14.619202  177987 retry.go:31] will retry after 226.757968ms: waiting for machine to come up
	I1213 00:08:14.847619  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:14.847962  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:14.847996  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:14.847892  177987 retry.go:31] will retry after 390.063287ms: waiting for machine to come up
	I1213 00:08:15.239515  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:15.239906  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:15.239939  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:15.239845  177987 retry.go:31] will retry after 341.644988ms: waiting for machine to come up
	I1213 00:08:15.583408  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:15.583848  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:15.583878  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:15.583796  177987 retry.go:31] will retry after 420.722896ms: waiting for machine to come up
	I1213 00:08:18.346616  176813 start.go:365] acquiring machines lock for old-k8s-version-508612: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 00:08:16.006364  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:16.006767  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:16.006803  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:16.006713  177987 retry.go:31] will retry after 548.041925ms: waiting for machine to come up
	I1213 00:08:16.556444  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:16.556880  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:16.556912  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:16.556833  177987 retry.go:31] will retry after 862.959808ms: waiting for machine to come up
	I1213 00:08:17.421147  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:17.421596  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:17.421630  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:17.421544  177987 retry.go:31] will retry after 1.085782098s: waiting for machine to come up
	I1213 00:08:18.509145  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:18.509595  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:18.509619  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:18.509556  177987 retry.go:31] will retry after 1.303432656s: waiting for machine to come up
	I1213 00:08:19.814985  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:19.815430  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:19.815473  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:19.815367  177987 retry.go:31] will retry after 1.337474429s: waiting for machine to come up
	I1213 00:08:21.154792  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:21.155213  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:21.155236  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:21.155165  177987 retry.go:31] will retry after 2.104406206s: waiting for machine to come up
	I1213 00:08:23.262615  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:23.263144  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:23.263174  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:23.263066  177987 retry.go:31] will retry after 2.064696044s: waiting for machine to come up
	I1213 00:08:25.330105  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:25.330586  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:25.330621  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:25.330544  177987 retry.go:31] will retry after 2.270537288s: waiting for machine to come up
	I1213 00:08:27.602267  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:27.602787  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:27.602810  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:27.602758  177987 retry.go:31] will retry after 3.020844169s: waiting for machine to come up
	I1213 00:08:30.626232  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:30.626696  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:30.626731  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:30.626645  177987 retry.go:31] will retry after 5.329279261s: waiting for machine to come up
	I1213 00:08:37.405257  177307 start.go:369] acquired machines lock for "no-preload-143586" in 4m8.02482326s
	I1213 00:08:37.405329  177307 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:08:37.405340  177307 fix.go:54] fixHost starting: 
	I1213 00:08:37.405777  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:08:37.405830  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:08:37.422055  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41433
	I1213 00:08:37.422558  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:08:37.423112  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:08:37.423143  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:08:37.423462  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:08:37.423650  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:08:37.423795  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:08:37.425302  177307 fix.go:102] recreateIfNeeded on no-preload-143586: state=Stopped err=<nil>
	I1213 00:08:37.425345  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	W1213 00:08:37.425519  177307 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:08:37.428723  177307 out.go:177] * Restarting existing kvm2 VM for "no-preload-143586" ...
	I1213 00:08:35.958579  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:35.959166  177122 main.go:141] libmachine: (embed-certs-335807) Found IP for machine: 192.168.61.249
	I1213 00:08:35.959200  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has current primary IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:35.959212  177122 main.go:141] libmachine: (embed-certs-335807) Reserving static IP address...
	I1213 00:08:35.959676  177122 main.go:141] libmachine: (embed-certs-335807) Reserved static IP address: 192.168.61.249
	I1213 00:08:35.959731  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "embed-certs-335807", mac: "52:54:00:20:1b:c0", ip: "192.168.61.249"} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:35.959746  177122 main.go:141] libmachine: (embed-certs-335807) Waiting for SSH to be available...
	I1213 00:08:35.959779  177122 main.go:141] libmachine: (embed-certs-335807) DBG | skip adding static IP to network mk-embed-certs-335807 - found existing host DHCP lease matching {name: "embed-certs-335807", mac: "52:54:00:20:1b:c0", ip: "192.168.61.249"}
	I1213 00:08:35.959795  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Getting to WaitForSSH function...
	I1213 00:08:35.962033  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:35.962419  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:35.962448  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:35.962552  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Using SSH client type: external
	I1213 00:08:35.962575  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa (-rw-------)
	I1213 00:08:35.962608  177122 main.go:141] libmachine: (embed-certs-335807) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:08:35.962624  177122 main.go:141] libmachine: (embed-certs-335807) DBG | About to run SSH command:
	I1213 00:08:35.962637  177122 main.go:141] libmachine: (embed-certs-335807) DBG | exit 0
	I1213 00:08:36.056268  177122 main.go:141] libmachine: (embed-certs-335807) DBG | SSH cmd err, output: <nil>: 
	I1213 00:08:36.056649  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetConfigRaw
	I1213 00:08:36.057283  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetIP
	I1213 00:08:36.060244  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.060656  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.060705  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.060930  177122 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/config.json ...
	I1213 00:08:36.061132  177122 machine.go:88] provisioning docker machine ...
	I1213 00:08:36.061150  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:36.061386  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetMachineName
	I1213 00:08:36.061569  177122 buildroot.go:166] provisioning hostname "embed-certs-335807"
	I1213 00:08:36.061593  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetMachineName
	I1213 00:08:36.061737  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.063997  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.064352  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.064374  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.064532  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:36.064743  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.064899  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.065039  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:36.065186  177122 main.go:141] libmachine: Using SSH client type: native
	I1213 00:08:36.065556  177122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.249 22 <nil> <nil>}
	I1213 00:08:36.065575  177122 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-335807 && echo "embed-certs-335807" | sudo tee /etc/hostname
	I1213 00:08:36.199697  177122 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-335807
	
	I1213 00:08:36.199733  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.202879  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.203289  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.203312  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.203495  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:36.203705  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.203845  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.203968  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:36.204141  177122 main.go:141] libmachine: Using SSH client type: native
	I1213 00:08:36.204545  177122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.249 22 <nil> <nil>}
	I1213 00:08:36.204564  177122 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-335807' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-335807/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-335807' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:08:36.336285  177122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:08:36.336315  177122 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:08:36.336337  177122 buildroot.go:174] setting up certificates
	I1213 00:08:36.336350  177122 provision.go:83] configureAuth start
	I1213 00:08:36.336364  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetMachineName
	I1213 00:08:36.336658  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetIP
	I1213 00:08:36.339327  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.339695  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.339727  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.339861  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.342106  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.342485  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.342506  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.342613  177122 provision.go:138] copyHostCerts
	I1213 00:08:36.342699  177122 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:08:36.342711  177122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:08:36.342795  177122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:08:36.342910  177122 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:08:36.342928  177122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:08:36.342962  177122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:08:36.343051  177122 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:08:36.343061  177122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:08:36.343099  177122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:08:36.343185  177122 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.embed-certs-335807 san=[192.168.61.249 192.168.61.249 localhost 127.0.0.1 minikube embed-certs-335807]
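
The "generating server cert" step above issues a server certificate whose SANs cover the node IP (192.168.61.249), localhost/127.0.0.1 and the machine names, signed with the profile's CA key. A minimal sketch of producing a certificate with those SANs using Go's crypto/x509 follows; to keep it short it is self-signed with an ECDSA key, whereas minikube signs against its own CA material (ca.pem/ca-key.pem), so treat the details as illustrative assumptions rather than the real implementation:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Illustrative stand-in for the CA-signed server cert minikube produces;
    	// here the certificate is simply self-signed to keep the sketch short.
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-335807"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs taken from the log line above.
    		DNSNames:    []string{"localhost", "minikube", "embed-certs-335807"},
    		IPAddresses: []net.IP{net.ParseIP("192.168.61.249"), net.ParseIP("127.0.0.1")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
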
	I1213 00:08:36.680595  177122 provision.go:172] copyRemoteCerts
	I1213 00:08:36.680687  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:08:36.680715  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.683411  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.683664  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.683690  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.683826  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:36.684044  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.684217  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:36.684370  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:08:36.773978  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:08:36.795530  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1213 00:08:36.817104  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 00:08:36.838510  177122 provision.go:86] duration metric: configureAuth took 502.141764ms
	I1213 00:08:36.838544  177122 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:08:36.838741  177122 config.go:182] Loaded profile config "embed-certs-335807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:08:36.838818  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.841372  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.841720  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.841759  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.841875  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:36.842095  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.842276  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.842447  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:36.842593  177122 main.go:141] libmachine: Using SSH client type: native
	I1213 00:08:36.843043  177122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.249 22 <nil> <nil>}
	I1213 00:08:36.843069  177122 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:08:37.150317  177122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:08:37.150364  177122 machine.go:91] provisioned docker machine in 1.089215763s
	I1213 00:08:37.150378  177122 start.go:300] post-start starting for "embed-certs-335807" (driver="kvm2")
	I1213 00:08:37.150391  177122 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:08:37.150424  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.150800  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:08:37.150829  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:37.153552  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.153920  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.153958  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.154075  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:37.154268  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.154406  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:37.154562  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:08:37.245839  177122 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:08:37.249929  177122 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:08:37.249959  177122 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:08:37.250029  177122 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:08:37.250114  177122 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:08:37.250202  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:08:37.258062  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:08:37.280034  177122 start.go:303] post-start completed in 129.642247ms
	I1213 00:08:37.280060  177122 fix.go:56] fixHost completed within 23.934930358s
	I1213 00:08:37.280085  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:37.282572  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.282861  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.282903  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.283059  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:37.283333  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.283516  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.283694  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:37.283898  177122 main.go:141] libmachine: Using SSH client type: native
	I1213 00:08:37.284217  177122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.249 22 <nil> <nil>}
	I1213 00:08:37.284229  177122 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1213 00:08:37.405050  177122 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702426117.378231894
	
	I1213 00:08:37.405077  177122 fix.go:206] guest clock: 1702426117.378231894
	I1213 00:08:37.405099  177122 fix.go:219] Guest: 2023-12-13 00:08:37.378231894 +0000 UTC Remote: 2023-12-13 00:08:37.280064166 +0000 UTC m=+266.483341520 (delta=98.167728ms)
	I1213 00:08:37.405127  177122 fix.go:190] guest clock delta is within tolerance: 98.167728ms
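
The guest-clock check above runs `date +%s.%N` on the VM (the format verbs show up as %!s(MISSING)/%!N(MISSING) in the logged command string), parses the "seconds.nanoseconds" output, and compares it with the host clock; here the skew is about 98 ms and is accepted. A minimal sketch of that comparison, using only the standard library and a made-up tolerance value since the real threshold is not shown in the log:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock parses output in the `date +%s.%N` format, e.g. "1702426117.378231894".
    func parseGuestClock(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		frac := parts[1]
    		if len(frac) > 9 {
    			frac = frac[:9]
    		}
    		frac += strings.Repeat("0", 9-len(frac)) // right-pad to nanoseconds
    		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1702426117.378231894") // value from the log above
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // hypothetical threshold for illustration
    	fmt.Printf("guest/host skew %v, within tolerance: %v\n", delta, delta <= tolerance)
    }
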
	I1213 00:08:37.405137  177122 start.go:83] releasing machines lock for "embed-certs-335807", held for 24.060057368s
	I1213 00:08:37.405161  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.405417  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetIP
	I1213 00:08:37.408128  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.408513  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.408559  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.408681  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.409264  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.409449  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.409542  177122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:08:37.409611  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:37.409647  177122 ssh_runner.go:195] Run: cat /version.json
	I1213 00:08:37.409673  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:37.412390  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.412733  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.412764  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.412795  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.412910  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:37.413104  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.413187  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.413224  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.413263  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:37.413462  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:37.413455  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:08:37.413633  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.413758  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:37.413899  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:08:37.531948  177122 ssh_runner.go:195] Run: systemctl --version
	I1213 00:08:37.537555  177122 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:08:37.677429  177122 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:08:37.684043  177122 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:08:37.684115  177122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:08:37.702304  177122 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:08:37.702327  177122 start.go:475] detecting cgroup driver to use...
	I1213 00:08:37.702388  177122 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:08:37.716601  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:08:37.728516  177122 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:08:37.728571  177122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:08:37.740595  177122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:08:37.753166  177122 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:08:37.853095  177122 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:08:37.970696  177122 docker.go:219] disabling docker service ...
	I1213 00:08:37.970769  177122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:08:37.983625  177122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:08:37.994924  177122 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:08:38.110057  177122 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:08:38.229587  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:08:38.243052  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:08:38.260480  177122 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1213 00:08:38.260547  177122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:08:38.269442  177122 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:08:38.269508  177122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:08:38.278569  177122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:08:38.287680  177122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:08:38.296798  177122 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:08:38.306247  177122 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:08:38.314189  177122 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:08:38.314251  177122 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:08:38.326702  177122 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:08:38.335111  177122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:08:38.435024  177122 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 00:08:38.600232  177122 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:08:38.600322  177122 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:08:38.606384  177122 start.go:543] Will wait 60s for crictl version
	I1213 00:08:38.606446  177122 ssh_runner.go:195] Run: which crictl
	I1213 00:08:38.611180  177122 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:08:38.654091  177122 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1213 00:08:38.654197  177122 ssh_runner.go:195] Run: crio --version
	I1213 00:08:38.705615  177122 ssh_runner.go:195] Run: crio --version
	I1213 00:08:38.755387  177122 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
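
After `systemctl restart crio`, the log above shows minikube waiting up to 60 s for the socket path /var/run/crio/crio.sock to appear before querying `crictl version` and `crio --version`. A simple stand-in for that socket wait (the 500 ms poll interval is an assumption; only the 60 s budget comes from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls for path to exist until the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
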
	I1213 00:08:37.430037  177307 main.go:141] libmachine: (no-preload-143586) Calling .Start
	I1213 00:08:37.430266  177307 main.go:141] libmachine: (no-preload-143586) Ensuring networks are active...
	I1213 00:08:37.430931  177307 main.go:141] libmachine: (no-preload-143586) Ensuring network default is active
	I1213 00:08:37.431290  177307 main.go:141] libmachine: (no-preload-143586) Ensuring network mk-no-preload-143586 is active
	I1213 00:08:37.431640  177307 main.go:141] libmachine: (no-preload-143586) Getting domain xml...
	I1213 00:08:37.432281  177307 main.go:141] libmachine: (no-preload-143586) Creating domain...
	I1213 00:08:38.686491  177307 main.go:141] libmachine: (no-preload-143586) Waiting to get IP...
	I1213 00:08:38.687472  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:38.688010  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:38.688095  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:38.687986  178111 retry.go:31] will retry after 246.453996ms: waiting for machine to come up
	I1213 00:08:38.936453  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:38.936931  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:38.936963  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:38.936879  178111 retry.go:31] will retry after 317.431088ms: waiting for machine to come up
	I1213 00:08:39.256641  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:39.257217  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:39.257241  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:39.257165  178111 retry.go:31] will retry after 379.635912ms: waiting for machine to come up
	I1213 00:08:38.757019  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetIP
	I1213 00:08:38.760125  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:38.760684  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:38.760720  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:38.760949  177122 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1213 00:08:38.765450  177122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:08:38.778459  177122 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1213 00:08:38.778539  177122 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:08:38.819215  177122 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1213 00:08:38.819281  177122 ssh_runner.go:195] Run: which lz4
	I1213 00:08:38.823481  177122 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1213 00:08:38.829034  177122 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 00:08:38.829069  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1213 00:08:40.721922  177122 crio.go:444] Took 1.898469 seconds to copy over tarball
	I1213 00:08:40.721984  177122 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 00:08:39.638611  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:39.639108  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:39.639137  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:39.639067  178111 retry.go:31] will retry after 596.16391ms: waiting for machine to come up
	I1213 00:08:40.237504  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:40.237957  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:40.237990  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:40.237911  178111 retry.go:31] will retry after 761.995315ms: waiting for machine to come up
	I1213 00:08:41.002003  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:41.002388  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:41.002413  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:41.002329  178111 retry.go:31] will retry after 693.578882ms: waiting for machine to come up
	I1213 00:08:41.697126  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:41.697617  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:41.697652  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:41.697555  178111 retry.go:31] will retry after 1.050437275s: waiting for machine to come up
	I1213 00:08:42.749227  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:42.749833  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:42.749866  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:42.749782  178111 retry.go:31] will retry after 1.175916736s: waiting for machine to come up
	I1213 00:08:43.927564  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:43.928115  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:43.928144  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:43.928065  178111 retry.go:31] will retry after 1.590924957s: waiting for machine to come up
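
The repeated retry.go:31 lines above show libmachine polling libvirt for the guest's DHCP lease, sleeping a little longer after each failed lookup. Below is a minimal, hypothetical Go sketch of that wait-with-backoff pattern; the machineHasIP probe, the jitter factor, and the growth rate are assumptions for illustration, not minikube's actual implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// machineHasIP is a stand-in for the libvirt DHCP-lease lookup that the
// "unable to find current IP address" lines in the log correspond to.
func machineHasIP() (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP retries the probe with a growing, jittered delay, mirroring the
// "will retry after 596.16391ms / 761.995315ms / ..." progression in the log.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := machineHasIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
		fmt.Printf("attempt %d: will retry after %v: waiting for machine to come up\n", attempt, wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // back off geometrically
	}
	return "", fmt.Errorf("machine did not get an IP within %v", timeout)
}

func main() {
	if _, err := waitForIP(5 * time.Second); err != nil {
		fmt.Println(err)
	}
}
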
	I1213 00:08:43.767138  177122 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.045121634s)
	I1213 00:08:43.767169  177122 crio.go:451] Took 3.045224 seconds to extract the tarball
	I1213 00:08:43.767178  177122 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 00:08:43.809047  177122 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:08:43.873704  177122 crio.go:496] all images are preloaded for cri-o runtime.
	I1213 00:08:43.873726  177122 cache_images.go:84] Images are preloaded, skipping loading
	I1213 00:08:43.873792  177122 ssh_runner.go:195] Run: crio config
	I1213 00:08:43.941716  177122 cni.go:84] Creating CNI manager for ""
	I1213 00:08:43.941747  177122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:08:43.941774  177122 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1213 00:08:43.941800  177122 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.249 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-335807 NodeName:embed-certs-335807 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 00:08:43.942026  177122 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-335807"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 00:08:43.942123  177122 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-335807 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-335807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1213 00:08:43.942201  177122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1213 00:08:43.951461  177122 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:08:43.951550  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:08:43.960491  177122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1213 00:08:43.976763  177122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 00:08:43.993725  177122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1213 00:08:44.010795  177122 ssh_runner.go:195] Run: grep 192.168.61.249	control-plane.minikube.internal$ /etc/hosts
	I1213 00:08:44.014668  177122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.249	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:08:44.027339  177122 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807 for IP: 192.168.61.249
	I1213 00:08:44.027376  177122 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:08:44.027550  177122 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:08:44.027617  177122 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:08:44.027701  177122 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/client.key
	I1213 00:08:44.027786  177122 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/apiserver.key.ba34ddd8
	I1213 00:08:44.027844  177122 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/proxy-client.key
	I1213 00:08:44.027987  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:08:44.028035  177122 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:08:44.028056  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:08:44.028088  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:08:44.028129  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:08:44.028158  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:08:44.028220  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:08:44.029033  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:08:44.054023  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 00:08:44.078293  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:08:44.102083  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 00:08:44.126293  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:08:44.149409  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:08:44.172887  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:08:44.195662  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:08:44.218979  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:08:44.241598  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:08:44.265251  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:08:44.290073  177122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:08:44.306685  177122 ssh_runner.go:195] Run: openssl version
	I1213 00:08:44.312422  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:08:44.322405  177122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:08:44.327215  177122 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:08:44.327296  177122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:08:44.333427  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1213 00:08:44.343574  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:08:44.353981  177122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:08:44.358997  177122 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:08:44.359051  177122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:08:44.364654  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 00:08:44.375147  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:08:44.384900  177122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:08:44.389492  177122 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:08:44.389553  177122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:08:44.395105  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 00:08:44.404656  177122 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:08:44.409852  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 00:08:44.415755  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 00:08:44.421911  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 00:08:44.428119  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 00:08:44.435646  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 00:08:44.441692  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
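
The openssl x509 -checkend 86400 runs above verify that each control-plane certificate remains valid for at least 24 hours before the cluster restart proceeds. The following is a rough sketch, assumed to be equivalent, of the same check in Go with crypto/x509; the file paths come from the log, while the helper name expiresWithin is illustrative and not part of minikube.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// roughly what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Println(p, "error:", err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
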
	I1213 00:08:44.447849  177122 kubeadm.go:404] StartCluster: {Name:embed-certs-335807 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-335807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.249 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:08:44.447976  177122 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:08:44.448025  177122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:08:44.495646  177122 cri.go:89] found id: ""
	I1213 00:08:44.495744  177122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:08:44.506405  177122 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1213 00:08:44.506454  177122 kubeadm.go:636] restartCluster start
	I1213 00:08:44.506515  177122 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 00:08:44.516110  177122 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:44.517275  177122 kubeconfig.go:92] found "embed-certs-335807" server: "https://192.168.61.249:8443"
	I1213 00:08:44.519840  177122 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 00:08:44.529214  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:44.529294  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:44.540415  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:44.540447  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:44.540497  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:44.552090  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:45.052810  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:45.052890  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:45.066300  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:45.552897  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:45.553031  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:45.564969  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:45.520191  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:45.520729  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:45.520754  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:45.520662  178111 retry.go:31] will retry after 1.407916355s: waiting for machine to come up
	I1213 00:08:46.930655  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:46.931073  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:46.931138  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:46.930993  178111 retry.go:31] will retry after 2.033169427s: waiting for machine to come up
	I1213 00:08:48.966888  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:48.967318  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:48.967351  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:48.967253  178111 retry.go:31] will retry after 2.277791781s: waiting for machine to come up
	I1213 00:08:46.052915  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:46.053025  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:46.068633  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:46.552208  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:46.552317  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:46.565045  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:47.052533  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:47.052627  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:47.068457  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:47.553040  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:47.553127  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:47.564657  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:48.052228  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:48.052322  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:48.068950  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:48.553171  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:48.553256  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:48.568868  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:49.052389  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:49.052515  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:49.064674  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:49.552894  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:49.553012  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:49.564302  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:50.052843  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:50.052941  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:50.064617  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:50.553231  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:50.553316  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:50.567944  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:51.247665  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:51.248141  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:51.248175  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:51.248098  178111 retry.go:31] will retry after 4.234068925s: waiting for machine to come up
	I1213 00:08:51.052574  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:51.052700  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:51.069491  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:51.553152  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:51.553234  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:51.565331  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:52.052984  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:52.053064  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:52.064748  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:52.552257  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:52.552362  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:52.563626  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:53.053196  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:53.053287  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:53.064273  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:53.552319  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:53.552423  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:53.563587  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:54.053227  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:54.053331  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:54.065636  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:54.530249  177122 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1213 00:08:54.530301  177122 kubeadm.go:1135] stopping kube-system containers ...
	I1213 00:08:54.530330  177122 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 00:08:54.530424  177122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:08:54.570200  177122 cri.go:89] found id: ""
	I1213 00:08:54.570275  177122 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 00:08:54.586722  177122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:08:54.596240  177122 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:08:54.596313  177122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:08:54.605202  177122 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1213 00:08:54.605226  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:54.718619  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:55.483563  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:55.483985  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:55.484024  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:55.483927  178111 retry.go:31] will retry after 5.446962632s: waiting for machine to come up
	I1213 00:08:55.944250  177122 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.225592219s)
	I1213 00:08:55.944282  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:56.132294  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:56.214859  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:56.297313  177122 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:08:56.297421  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:56.315946  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:56.830228  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:57.329695  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:57.830336  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:58.329610  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:58.829933  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:58.853978  177122 api_server.go:72] duration metric: took 2.556667404s to wait for apiserver process to appear ...
	I1213 00:08:58.854013  177122 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:08:58.854054  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
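
Once the kube-apiserver process exists, the restart logic stops counting processes and instead polls the /healthz endpoint until it answers. A small, hypothetical Go sketch of that poll loop follows; the insecure TLS setting is an assumption made only to keep the example self-contained, whereas minikube's real client trusts the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver healthz URL until it returns "ok" or the
// timeout expires, similar in spirit to the "waiting for apiserver healthz
// status" step in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption for this sketch only: skip certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %v", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.249:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
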
	I1213 00:09:02.161624  177409 start.go:369] acquired machines lock for "default-k8s-diff-port-743278" in 4m22.024178516s
	I1213 00:09:02.161693  177409 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:09:02.161704  177409 fix.go:54] fixHost starting: 
	I1213 00:09:02.162127  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:02.162174  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:02.179045  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41749
	I1213 00:09:02.179554  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:02.180099  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:02.180131  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:02.180461  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:02.180658  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:02.180795  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:02.182459  177409 fix.go:102] recreateIfNeeded on default-k8s-diff-port-743278: state=Stopped err=<nil>
	I1213 00:09:02.182498  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	W1213 00:09:02.182657  177409 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:09:02.184934  177409 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-743278" ...
	I1213 00:09:00.933522  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:00.934020  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has current primary IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:00.934046  177307 main.go:141] libmachine: (no-preload-143586) Found IP for machine: 192.168.50.181
	I1213 00:09:00.934058  177307 main.go:141] libmachine: (no-preload-143586) Reserving static IP address...
	I1213 00:09:00.934538  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "no-preload-143586", mac: "52:54:00:4d:da:7b", ip: "192.168.50.181"} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:00.934573  177307 main.go:141] libmachine: (no-preload-143586) DBG | skip adding static IP to network mk-no-preload-143586 - found existing host DHCP lease matching {name: "no-preload-143586", mac: "52:54:00:4d:da:7b", ip: "192.168.50.181"}
	I1213 00:09:00.934592  177307 main.go:141] libmachine: (no-preload-143586) Reserved static IP address: 192.168.50.181
	I1213 00:09:00.934601  177307 main.go:141] libmachine: (no-preload-143586) Waiting for SSH to be available...
	I1213 00:09:00.934610  177307 main.go:141] libmachine: (no-preload-143586) DBG | Getting to WaitForSSH function...
	I1213 00:09:00.936830  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:00.937236  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:00.937283  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:00.937399  177307 main.go:141] libmachine: (no-preload-143586) DBG | Using SSH client type: external
	I1213 00:09:00.937421  177307 main.go:141] libmachine: (no-preload-143586) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa (-rw-------)
	I1213 00:09:00.937458  177307 main.go:141] libmachine: (no-preload-143586) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.181 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:09:00.937473  177307 main.go:141] libmachine: (no-preload-143586) DBG | About to run SSH command:
	I1213 00:09:00.937485  177307 main.go:141] libmachine: (no-preload-143586) DBG | exit 0
	I1213 00:09:01.024658  177307 main.go:141] libmachine: (no-preload-143586) DBG | SSH cmd err, output: <nil>: 
	I1213 00:09:01.024996  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetConfigRaw
	I1213 00:09:01.025611  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetIP
	I1213 00:09:01.028062  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.028471  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.028509  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.028734  177307 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/config.json ...
	I1213 00:09:01.028955  177307 machine.go:88] provisioning docker machine ...
	I1213 00:09:01.028980  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:01.029193  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetMachineName
	I1213 00:09:01.029394  177307 buildroot.go:166] provisioning hostname "no-preload-143586"
	I1213 00:09:01.029409  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetMachineName
	I1213 00:09:01.029580  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.031949  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.032273  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.032305  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.032413  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.032599  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.032758  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.032877  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.033036  177307 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:01.033377  177307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1213 00:09:01.033395  177307 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-143586 && echo "no-preload-143586" | sudo tee /etc/hostname
	I1213 00:09:01.157420  177307 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-143586
	
	I1213 00:09:01.157461  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.160181  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.160498  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.160535  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.160654  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.160915  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.161104  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.161299  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.161469  177307 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:01.161785  177307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1213 00:09:01.161811  177307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-143586' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-143586/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-143586' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:09:01.287746  177307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:09:01.287776  177307 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:09:01.287835  177307 buildroot.go:174] setting up certificates
	I1213 00:09:01.287844  177307 provision.go:83] configureAuth start
	I1213 00:09:01.287857  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetMachineName
	I1213 00:09:01.288156  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetIP
	I1213 00:09:01.290754  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.291147  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.291179  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.291296  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.293643  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.294002  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.294034  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.294184  177307 provision.go:138] copyHostCerts
	I1213 00:09:01.294243  177307 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:09:01.294256  177307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:09:01.294323  177307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:09:01.294441  177307 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:09:01.294453  177307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:09:01.294489  177307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:09:01.294569  177307 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:09:01.294578  177307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:09:01.294610  177307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:09:01.294683  177307 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.no-preload-143586 san=[192.168.50.181 192.168.50.181 localhost 127.0.0.1 minikube no-preload-143586]
	I1213 00:09:01.407742  177307 provision.go:172] copyRemoteCerts
	I1213 00:09:01.407823  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:09:01.407856  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.410836  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.411141  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.411170  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.411455  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.411698  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.411883  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.412056  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:09:01.501782  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 00:09:01.530009  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:09:01.555147  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1213 00:09:01.580479  177307 provision.go:86] duration metric: configureAuth took 292.598329ms
	I1213 00:09:01.580511  177307 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:09:01.580732  177307 config.go:182] Loaded profile config "no-preload-143586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1213 00:09:01.580835  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.583742  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.584241  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.584274  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.584581  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.584798  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.585004  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.585184  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.585429  177307 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:01.585889  177307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1213 00:09:01.585928  177307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:09:01.909801  177307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:09:01.909855  177307 machine.go:91] provisioned docker machine in 880.876025ms
	I1213 00:09:01.909868  177307 start.go:300] post-start starting for "no-preload-143586" (driver="kvm2")
	I1213 00:09:01.909883  177307 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:09:01.909905  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:01.910311  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:09:01.910349  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.913247  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.913635  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.913669  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.913824  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.914044  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.914199  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.914349  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:09:02.005986  177307 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:09:02.011294  177307 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:09:02.011323  177307 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:09:02.011403  177307 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:09:02.011494  177307 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:09:02.011601  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:09:02.022942  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:02.044535  177307 start.go:303] post-start completed in 134.650228ms
	I1213 00:09:02.044569  177307 fix.go:56] fixHost completed within 24.639227496s
	I1213 00:09:02.044597  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:02.047115  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.047543  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:02.047573  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.047758  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:02.047986  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:02.048161  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:02.048340  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:02.048500  177307 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:02.048803  177307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1213 00:09:02.048816  177307 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1213 00:09:02.161458  177307 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702426142.108795362
	
	I1213 00:09:02.161485  177307 fix.go:206] guest clock: 1702426142.108795362
	I1213 00:09:02.161496  177307 fix.go:219] Guest: 2023-12-13 00:09:02.108795362 +0000 UTC Remote: 2023-12-13 00:09:02.044573107 +0000 UTC m=+272.815740988 (delta=64.222255ms)
	I1213 00:09:02.161522  177307 fix.go:190] guest clock delta is within tolerance: 64.222255ms
	I1213 00:09:02.161529  177307 start.go:83] releasing machines lock for "no-preload-143586", held for 24.756225075s
	I1213 00:09:02.161560  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:02.161847  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetIP
	I1213 00:09:02.164980  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.165383  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:02.165406  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.165582  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:02.166273  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:02.166493  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:02.166576  177307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:09:02.166621  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:02.166903  177307 ssh_runner.go:195] Run: cat /version.json
	I1213 00:09:02.166931  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:02.169526  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.169553  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.169895  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:02.169938  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:02.169978  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.170000  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.170183  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:02.170282  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:02.170344  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:02.170473  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:02.170480  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:02.170603  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:09:02.170653  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:02.170804  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:09:02.281372  177307 ssh_runner.go:195] Run: systemctl --version
	I1213 00:09:02.288798  177307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:09:02.432746  177307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:09:02.441453  177307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:09:02.441539  177307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:09:02.456484  177307 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:09:02.456512  177307 start.go:475] detecting cgroup driver to use...
	I1213 00:09:02.456578  177307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:09:02.473267  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:09:02.485137  177307 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:09:02.485226  177307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:09:02.497728  177307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:09:02.510592  177307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:09:02.657681  177307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:09:02.791382  177307 docker.go:219] disabling docker service ...
	I1213 00:09:02.791476  177307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:09:02.804977  177307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:09:02.817203  177307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:09:02.927181  177307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:09:03.037010  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:09:03.050235  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:09:03.068944  177307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1213 00:09:03.069048  177307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:03.078813  177307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:09:03.078975  177307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:03.089064  177307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:03.098790  177307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:03.109679  177307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:09:03.120686  177307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:09:03.128767  177307 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:09:03.128820  177307 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:09:03.141210  177307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:09:03.149602  177307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:09:03.254618  177307 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 00:09:03.434005  177307 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:09:03.434097  177307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:09:03.440391  177307 start.go:543] Will wait 60s for crictl version
	I1213 00:09:03.440481  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:03.445570  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:09:03.492155  177307 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1213 00:09:03.492240  177307 ssh_runner.go:195] Run: crio --version
	I1213 00:09:03.549854  177307 ssh_runner.go:195] Run: crio --version
	I1213 00:09:03.605472  177307 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1213 00:09:03.606678  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetIP
	I1213 00:09:03.610326  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:03.610753  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:03.610789  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:03.611019  177307 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1213 00:09:03.616608  177307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:03.632258  177307 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1213 00:09:03.632317  177307 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:03.672640  177307 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1213 00:09:03.672666  177307 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 00:09:03.672723  177307 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:03.672772  177307 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:03.672774  177307 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:03.672820  177307 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:03.673002  177307 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1213 00:09:03.673032  177307 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:03.673038  177307 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:03.673094  177307 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:03.674386  177307 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:03.674433  177307 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1213 00:09:03.674505  177307 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:03.674648  177307 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:03.674774  177307 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:03.674822  177307 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:03.674864  177307 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:03.675103  177307 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:03.808980  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:03.812271  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1213 00:09:03.827742  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:03.828695  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:03.831300  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:03.846041  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:03.850598  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:03.908323  177307 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I1213 00:09:03.908378  177307 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:03.908458  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.122878  177307 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I1213 00:09:04.122930  177307 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:04.122955  177307 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1213 00:09:04.123115  177307 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:04.123132  177307 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1213 00:09:04.123164  177307 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:04.122988  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.123203  177307 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I1213 00:09:04.123230  177307 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:04.123245  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:04.123267  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.123065  177307 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I1213 00:09:04.123304  177307 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:04.123311  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.123338  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.123201  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.135289  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:04.139046  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:04.206020  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:04.206025  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I1213 00:09:04.206195  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1213 00:09:04.206291  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:04.206422  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:04.247875  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I1213 00:09:04.248003  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1213 00:09:04.248126  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1213 00:09:04.248193  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1213 00:09:02.719708  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:02.719761  177122 api_server.go:103] status: https://192.168.61.249:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:02.719779  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:09:02.780571  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:02.780621  177122 api_server.go:103] status: https://192.168.61.249:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:03.281221  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:09:03.290375  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:03.290413  177122 api_server.go:103] status: https://192.168.61.249:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:03.781510  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:09:03.788285  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:03.788314  177122 api_server.go:103] status: https://192.168.61.249:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:04.280872  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:09:04.288043  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 200:
	ok
	I1213 00:09:04.299772  177122 api_server.go:141] control plane version: v1.28.4
	I1213 00:09:04.299808  177122 api_server.go:131] duration metric: took 5.445787793s to wait for apiserver health ...
	I1213 00:09:04.299821  177122 cni.go:84] Creating CNI manager for ""
	I1213 00:09:04.299830  177122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:04.301759  177122 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:09:02.186420  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Start
	I1213 00:09:02.186584  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Ensuring networks are active...
	I1213 00:09:02.187464  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Ensuring network default is active
	I1213 00:09:02.187836  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Ensuring network mk-default-k8s-diff-port-743278 is active
	I1213 00:09:02.188238  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Getting domain xml...
	I1213 00:09:02.188979  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Creating domain...
	I1213 00:09:03.516121  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting to get IP...
	I1213 00:09:03.517461  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:03.518001  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:03.518058  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:03.517966  178294 retry.go:31] will retry after 198.440266ms: waiting for machine to come up
	I1213 00:09:03.718554  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:03.718808  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:03.718846  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:03.718804  178294 retry.go:31] will retry after 319.889216ms: waiting for machine to come up
	I1213 00:09:04.040334  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:04.040806  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:04.040956  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:04.040869  178294 retry.go:31] will retry after 465.804275ms: waiting for machine to come up
	I1213 00:09:04.508751  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:04.509133  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:04.509237  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:04.509181  178294 retry.go:31] will retry after 609.293222ms: waiting for machine to come up
	I1213 00:09:04.303497  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:09:04.332773  177122 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:09:04.373266  177122 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:09:04.384737  177122 system_pods.go:59] 9 kube-system pods found
	I1213 00:09:04.384791  177122 system_pods.go:61] "coredns-5dd5756b68-5vm25" [83fb4b19-82a2-42eb-b4df-6fd838fb8848] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 00:09:04.384805  177122 system_pods.go:61] "coredns-5dd5756b68-6mfmr" [e9598d8f-e497-4725-8eca-7fe0e7c2c2f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 00:09:04.384820  177122 system_pods.go:61] "etcd-embed-certs-335807" [cf066481-3312-4fce-8e29-e00a0177f188] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 00:09:04.384833  177122 system_pods.go:61] "kube-apiserver-embed-certs-335807" [0a545be1-8bb8-425a-889e-5ee1293e0bdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 00:09:04.384847  177122 system_pods.go:61] "kube-controller-manager-embed-certs-335807" [fd7ec770-5008-46f9-9f41-122e56baf2e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 00:09:04.384862  177122 system_pods.go:61] "kube-proxy-k8n7r" [df8cefdc-7c91-40e6-8949-ba413fd75b28] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 00:09:04.384874  177122 system_pods.go:61] "kube-scheduler-embed-certs-335807" [d2431157-640c-49e6-a83d-37cac6be1c50] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 00:09:04.384883  177122 system_pods.go:61] "metrics-server-57f55c9bc5-fx5pd" [8aa6fc5a-5649-47b2-a7de-3cabfd1515a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:09:04.384899  177122 system_pods.go:61] "storage-provisioner" [02026bc0-4e03-4747-ad77-052f2911efe1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 00:09:04.384909  177122 system_pods.go:74] duration metric: took 11.614377ms to wait for pod list to return data ...
	I1213 00:09:04.384928  177122 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:09:04.389533  177122 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:09:04.389578  177122 node_conditions.go:123] node cpu capacity is 2
	I1213 00:09:04.389594  177122 node_conditions.go:105] duration metric: took 4.657548ms to run NodePressure ...
	I1213 00:09:04.389622  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:04.771105  177122 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1213 00:09:04.778853  177122 kubeadm.go:787] kubelet initialised
	I1213 00:09:04.778886  177122 kubeadm.go:788] duration metric: took 7.74816ms waiting for restarted kubelet to initialise ...
	I1213 00:09:04.778898  177122 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:04.795344  177122 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:04.323893  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1213 00:09:04.323901  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I1213 00:09:04.324122  177307 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1213 00:09:04.324168  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1213 00:09:04.324006  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I1213 00:09:04.324031  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I1213 00:09:04.324300  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1213 00:09:04.324336  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1213 00:09:04.324067  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1213 00:09:04.324096  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I1213 00:09:04.324100  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1213 00:09:04.597566  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:07.626684  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.302476413s)
	I1213 00:09:07.626718  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I1213 00:09:07.626754  177307 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1213 00:09:07.626784  177307 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0: (3.302394961s)
	I1213 00:09:07.626821  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1213 00:09:07.626824  177307 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (3.302508593s)
	I1213 00:09:07.626859  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I1213 00:09:07.626833  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1213 00:09:07.626882  177307 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.029282623s)
	I1213 00:09:07.626755  177307 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.302393062s)
	I1213 00:09:07.626939  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I1213 00:09:07.626975  177307 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1213 00:09:07.627010  177307 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:07.627072  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:05.120691  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:05.121251  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:05.121285  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:05.121183  178294 retry.go:31] will retry after 488.195845ms: waiting for machine to come up
	I1213 00:09:05.610815  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:05.611226  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:05.611258  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:05.611167  178294 retry.go:31] will retry after 705.048097ms: waiting for machine to come up
	I1213 00:09:06.317891  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:06.318329  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:06.318353  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:06.318278  178294 retry.go:31] will retry after 788.420461ms: waiting for machine to come up
	I1213 00:09:07.108179  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:07.108736  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:07.108769  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:07.108696  178294 retry.go:31] will retry after 1.331926651s: waiting for machine to come up
	I1213 00:09:08.442609  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:08.443087  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:08.443114  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:08.443032  178294 retry.go:31] will retry after 1.180541408s: waiting for machine to come up
	I1213 00:09:09.625170  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:09.625722  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:09.625753  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:09.625653  178294 retry.go:31] will retry after 1.866699827s: waiting for machine to come up
	I1213 00:09:06.828008  177122 pod_ready.go:102] pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:09.322889  177122 pod_ready.go:102] pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:09.822883  177122 pod_ready.go:92] pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:09.822913  177122 pod_ready.go:81] duration metric: took 5.027534973s waiting for pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:09.822927  177122 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6mfmr" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:09.828990  177122 pod_ready.go:92] pod "coredns-5dd5756b68-6mfmr" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:09.829018  177122 pod_ready.go:81] duration metric: took 6.083345ms waiting for pod "coredns-5dd5756b68-6mfmr" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:09.829035  177122 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:09.803403  177307 ssh_runner.go:235] Completed: which crictl: (2.176302329s)
	I1213 00:09:09.803541  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:09.803468  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.176578633s)
	I1213 00:09:09.803602  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1213 00:09:09.803634  177307 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1213 00:09:09.803673  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1213 00:09:09.851557  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1213 00:09:09.851690  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1213 00:09:12.107222  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.303514888s)
	I1213 00:09:12.107284  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I1213 00:09:12.107292  177307 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.255575693s)
	I1213 00:09:12.107308  177307 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1213 00:09:12.107336  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1213 00:09:12.107363  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1213 00:09:11.494563  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:11.495148  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:11.495182  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:11.495076  178294 retry.go:31] will retry after 2.859065831s: waiting for machine to come up
	I1213 00:09:14.356328  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:14.356778  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:14.356814  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:14.356719  178294 retry.go:31] will retry after 3.506628886s: waiting for machine to come up
	I1213 00:09:11.849447  177122 pod_ready.go:102] pod "etcd-embed-certs-335807" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:14.349299  177122 pod_ready.go:102] pod "etcd-embed-certs-335807" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:14.853963  177122 pod_ready.go:92] pod "etcd-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:14.853989  177122 pod_ready.go:81] duration metric: took 5.024945989s waiting for pod "etcd-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:14.854001  177122 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:14.861663  177122 pod_ready.go:92] pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:14.861685  177122 pod_ready.go:81] duration metric: took 7.676036ms waiting for pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:14.861697  177122 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:16.223090  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.115697846s)
	I1213 00:09:16.223134  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1213 00:09:16.223165  177307 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1213 00:09:16.223211  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1213 00:09:17.473407  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.25017316s)
	I1213 00:09:17.473435  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I1213 00:09:17.473476  177307 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1213 00:09:17.473552  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1213 00:09:17.864739  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:17.865213  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:17.865237  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:17.865171  178294 retry.go:31] will retry after 2.94932643s: waiting for machine to come up
	I1213 00:09:16.884215  177122 pod_ready.go:102] pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:17.383872  177122 pod_ready.go:92] pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:17.383906  177122 pod_ready.go:81] duration metric: took 2.52219538s waiting for pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.383928  177122 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-k8n7r" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.389464  177122 pod_ready.go:92] pod "kube-proxy-k8n7r" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:17.389482  177122 pod_ready.go:81] duration metric: took 5.547172ms waiting for pod "kube-proxy-k8n7r" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.389490  177122 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.419020  177122 pod_ready.go:92] pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:17.419047  177122 pod_ready.go:81] duration metric: took 29.549704ms waiting for pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.419056  177122 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:19.730210  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:22.069281  176813 start.go:369] acquired machines lock for "old-k8s-version-508612" in 1m3.72259979s
	I1213 00:09:22.069359  176813 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:09:22.069367  176813 fix.go:54] fixHost starting: 
	I1213 00:09:22.069812  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:22.069851  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:22.088760  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45405
	I1213 00:09:22.089211  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:22.089766  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:09:22.089795  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:22.090197  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:22.090396  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:22.090574  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:09:22.092039  176813 fix.go:102] recreateIfNeeded on old-k8s-version-508612: state=Stopped err=<nil>
	I1213 00:09:22.092064  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	W1213 00:09:22.092241  176813 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:09:22.094310  176813 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-508612" ...
	I1213 00:09:20.817420  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.817794  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has current primary IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.817833  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Found IP for machine: 192.168.72.144
	I1213 00:09:20.817870  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Reserving static IP address...
	I1213 00:09:20.818250  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-743278", mac: "52:54:00:d1:a8:22", ip: "192.168.72.144"} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:20.818272  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Reserved static IP address: 192.168.72.144
	I1213 00:09:20.818286  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | skip adding static IP to network mk-default-k8s-diff-port-743278 - found existing host DHCP lease matching {name: "default-k8s-diff-port-743278", mac: "52:54:00:d1:a8:22", ip: "192.168.72.144"}
	I1213 00:09:20.818298  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Getting to WaitForSSH function...
	I1213 00:09:20.818312  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for SSH to be available...
	I1213 00:09:20.820093  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.820378  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:20.820409  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.820525  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Using SSH client type: external
	I1213 00:09:20.820552  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa (-rw-------)
	I1213 00:09:20.820587  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:09:20.820618  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | About to run SSH command:
	I1213 00:09:20.820632  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | exit 0
	I1213 00:09:20.907942  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | SSH cmd err, output: <nil>: 
	I1213 00:09:20.908280  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetConfigRaw
	I1213 00:09:20.909042  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetIP
	I1213 00:09:20.911222  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.911544  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:20.911569  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.911826  177409 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/config.json ...
	I1213 00:09:20.912048  177409 machine.go:88] provisioning docker machine ...
	I1213 00:09:20.912071  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:20.912284  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetMachineName
	I1213 00:09:20.912425  177409 buildroot.go:166] provisioning hostname "default-k8s-diff-port-743278"
	I1213 00:09:20.912460  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetMachineName
	I1213 00:09:20.912585  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:20.914727  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.915081  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:20.915113  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.915257  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:20.915449  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:20.915562  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:20.915671  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:20.915842  177409 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:20.916275  177409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1213 00:09:20.916293  177409 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-743278 && echo "default-k8s-diff-port-743278" | sudo tee /etc/hostname
	I1213 00:09:21.042561  177409 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-743278
	
	I1213 00:09:21.042606  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.045461  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.045809  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.045851  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.045957  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.046181  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.046350  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.046508  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.046685  177409 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:21.047008  177409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1213 00:09:21.047034  177409 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-743278' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-743278/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-743278' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:09:21.169124  177409 main.go:141] libmachine: SSH cmd err, output: <nil>: 
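
The two SSH commands above are how the provisioner makes the guest hostname stick: set it with hostname/tee, then patch /etc/hosts idempotently so the name resolves locally. Below is a minimal Go sketch that renders an equivalent shell snippet for an arbitrary hostname; the helper name is illustrative and not minikube's own code.

package main

import "fmt"

// renderHostsFix returns a shell snippet equivalent to the one logged above:
// it rewrites an existing 127.0.1.1 entry or appends one, so repeated runs
// leave /etc/hosts unchanged. (Illustrative sketch, not minikube's helper.)
func renderHostsFix(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(renderHostsFix("default-k8s-diff-port-743278"))
}
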
	I1213 00:09:21.169155  177409 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:09:21.169175  177409 buildroot.go:174] setting up certificates
	I1213 00:09:21.169185  177409 provision.go:83] configureAuth start
	I1213 00:09:21.169194  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetMachineName
	I1213 00:09:21.169502  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetIP
	I1213 00:09:21.172929  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.173329  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.173361  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.173540  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.175847  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.176249  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.176277  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.176447  177409 provision.go:138] copyHostCerts
	I1213 00:09:21.176509  177409 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:09:21.176525  177409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:09:21.176584  177409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:09:21.176677  177409 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:09:21.176744  177409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:09:21.176775  177409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:09:21.176841  177409 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:09:21.176848  177409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:09:21.176866  177409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:09:21.176922  177409 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-743278 san=[192.168.72.144 192.168.72.144 localhost 127.0.0.1 minikube default-k8s-diff-port-743278]
	I1213 00:09:21.314924  177409 provision.go:172] copyRemoteCerts
	I1213 00:09:21.315003  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:09:21.315032  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.318149  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.318550  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.318582  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.318787  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.319005  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.319191  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.319346  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:21.409699  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:09:21.438626  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1213 00:09:21.468607  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 00:09:21.495376  177409 provision.go:86] duration metric: configureAuth took 326.171872ms
	I1213 00:09:21.495403  177409 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:09:21.495621  177409 config.go:182] Loaded profile config "default-k8s-diff-port-743278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:09:21.495700  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.498778  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.499247  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.499279  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.499495  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.499710  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.499877  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.500098  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.500285  177409 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:21.500728  177409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1213 00:09:21.500751  177409 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:09:21.822577  177409 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:09:21.822606  177409 machine.go:91] provisioned docker machine in 910.541774ms
	I1213 00:09:21.822619  177409 start.go:300] post-start starting for "default-k8s-diff-port-743278" (driver="kvm2")
	I1213 00:09:21.822632  177409 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:09:21.822659  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:21.823015  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:09:21.823044  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.825948  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.826367  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.826403  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.826577  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.826789  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.826965  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.827146  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:21.915743  177409 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:09:21.920142  177409 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:09:21.920169  177409 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:09:21.920249  177409 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:09:21.920343  177409 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:09:21.920474  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:09:21.929896  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:21.951854  177409 start.go:303] post-start completed in 129.217251ms
	I1213 00:09:21.951880  177409 fix.go:56] fixHost completed within 19.790175647s
	I1213 00:09:21.951904  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.954710  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.955103  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.955137  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.955352  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.955533  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.955685  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.955814  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.955980  177409 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:21.956485  177409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1213 00:09:21.956505  177409 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 00:09:22.069059  177409 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702426162.011062386
	
	I1213 00:09:22.069089  177409 fix.go:206] guest clock: 1702426162.011062386
	I1213 00:09:22.069100  177409 fix.go:219] Guest: 2023-12-13 00:09:22.011062386 +0000 UTC Remote: 2023-12-13 00:09:21.951884769 +0000 UTC m=+281.971624237 (delta=59.177617ms)
	I1213 00:09:22.069142  177409 fix.go:190] guest clock delta is within tolerance: 59.177617ms
	I1213 00:09:22.069153  177409 start.go:83] releasing machines lock for "default-k8s-diff-port-743278", held for 19.907486915s
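
The guest-clock check above runs `date +%s.%N` in the VM over SSH and compares the result with the host's clock, accepting a small skew (here ~59ms, reported as within tolerance). A rough Go sketch of parsing that output and computing the delta follows; the one-second tolerance is an assumption for illustration only.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses `date +%s.%N` output (e.g. "1702426162.011062386").
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1702426162.011062386")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	// Sub-second skew is treated as acceptable in this sketch; the exact
	// tolerance used by the tooling is not shown in the log.
	fmt.Printf("delta=%v within tolerance: %v\n", delta, math.Abs(delta.Seconds()) < 1)
}
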
	I1213 00:09:22.069191  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:22.069478  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetIP
	I1213 00:09:22.072371  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.072761  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:22.072794  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.072922  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:22.073441  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:22.073605  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:22.073670  177409 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:09:22.073719  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:22.073821  177409 ssh_runner.go:195] Run: cat /version.json
	I1213 00:09:22.073841  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:22.076233  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.076550  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.076703  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:22.076733  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.076874  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:22.077050  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:22.077080  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.077052  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:22.077227  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:22.077303  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:22.077540  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:22.077630  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:22.077714  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:22.077851  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:22.188131  177409 ssh_runner.go:195] Run: systemctl --version
	I1213 00:09:22.193896  177409 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:09:22.339227  177409 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:09:22.346292  177409 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:09:22.346366  177409 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:09:22.361333  177409 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:09:22.361364  177409 start.go:475] detecting cgroup driver to use...
	I1213 00:09:22.361438  177409 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:09:22.374698  177409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:09:22.387838  177409 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:09:22.387897  177409 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:09:22.402969  177409 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:09:22.417038  177409 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:09:22.533130  177409 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:09:22.665617  177409 docker.go:219] disabling docker service ...
	I1213 00:09:22.665690  177409 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:09:22.681327  177409 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:09:22.692842  177409 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:09:22.816253  177409 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:09:22.951988  177409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:09:22.967607  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:09:22.985092  177409 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1213 00:09:22.985158  177409 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:22.994350  177409 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:09:22.994403  177409 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:23.003372  177409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:23.012176  177409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:23.021215  177409 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:09:23.031105  177409 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:09:23.039486  177409 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:09:23.039552  177409 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:09:23.053085  177409 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:09:23.062148  177409 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:09:23.182275  177409 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 00:09:23.357901  177409 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:09:23.357991  177409 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:09:23.364148  177409 start.go:543] Will wait 60s for crictl version
	I1213 00:09:23.364225  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:09:23.368731  177409 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:09:23.408194  177409 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1213 00:09:23.408288  177409 ssh_runner.go:195] Run: crio --version
	I1213 00:09:23.461483  177409 ssh_runner.go:195] Run: crio --version
	I1213 00:09:23.513553  177409 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
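
After rewriting the CRI-O configuration with the sed commands above and restarting the service, the run waits up to 60s for /var/run/crio/crio.sock to appear before probing crictl. A simple sketch of that kind of wait loop is below; the poll interval and error text are assumptions, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a file or socket path until it exists or the
// deadline passes, mirroring the "Will wait 60s for socket path" step above.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
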
	I1213 00:09:20.148999  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.675412499s)
	I1213 00:09:20.149037  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I1213 00:09:20.149073  177307 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1213 00:09:20.149116  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1213 00:09:21.101559  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1213 00:09:21.101608  177307 cache_images.go:123] Successfully loaded all cached images
	I1213 00:09:21.101615  177307 cache_images.go:92] LoadImages completed in 17.428934706s
	I1213 00:09:21.101694  177307 ssh_runner.go:195] Run: crio config
	I1213 00:09:21.159955  177307 cni.go:84] Creating CNI manager for ""
	I1213 00:09:21.159978  177307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:21.159999  177307 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1213 00:09:21.160023  177307 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.181 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-143586 NodeName:no-preload-143586 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 00:09:21.160198  177307 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-143586"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.181
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.181"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 00:09:21.160303  177307 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-143586 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-143586 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1213 00:09:21.160378  177307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1213 00:09:21.170615  177307 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:09:21.170701  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:09:21.180228  177307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1213 00:09:21.198579  177307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1213 00:09:21.215096  177307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1213 00:09:21.233288  177307 ssh_runner.go:195] Run: grep 192.168.50.181	control-plane.minikube.internal$ /etc/hosts
	I1213 00:09:21.236666  177307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:21.248811  177307 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586 for IP: 192.168.50.181
	I1213 00:09:21.248847  177307 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:21.249007  177307 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:09:21.249058  177307 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:09:21.249154  177307 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/client.key
	I1213 00:09:21.249238  177307 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/apiserver.key.8f5c2e66
	I1213 00:09:21.249291  177307 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/proxy-client.key
	I1213 00:09:21.249427  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:09:21.249468  177307 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:09:21.249484  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:09:21.249523  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:09:21.249559  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:09:21.249591  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:09:21.249642  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:21.250517  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:09:21.276697  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 00:09:21.299356  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:09:21.322849  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 00:09:21.348145  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:09:21.370885  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:09:21.393257  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:09:21.418643  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:09:21.446333  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:09:21.476374  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:09:21.506662  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:09:21.530653  177307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:09:21.555129  177307 ssh_runner.go:195] Run: openssl version
	I1213 00:09:21.561174  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:09:21.571372  177307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:21.575988  177307 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:21.576053  177307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:21.581633  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 00:09:21.590564  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:09:21.599910  177307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:09:21.604113  177307 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:09:21.604160  177307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:09:21.609303  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1213 00:09:21.619194  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:09:21.628171  177307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:09:21.632419  177307 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:09:21.632494  177307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:09:21.638310  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
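
The openssl x509 -hash / ln -fs pairs above install each CA certificate under /etc/ssl/certs using the OpenSSL subject-hash naming convention (<hash>.0), which is how TLS libraries locate trusted roots on the guest. A small sketch of the same pattern follows; the paths are illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of a CA certificate and
// exposes it as /etc/ssl/certs/<hash>.0, as in the log lines above.
func linkByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
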
	I1213 00:09:21.648369  177307 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:09:21.653143  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 00:09:21.659543  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 00:09:21.665393  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 00:09:21.670855  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 00:09:21.676290  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 00:09:21.681864  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
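
The repeated `openssl x509 -noout -in <cert> -checkend 86400` calls above ask whether each certificate will expire within the next 86400 seconds (24 hours); a non-zero exit would mark the certificate for regeneration. An equivalent check written with Go's crypto/x509, as a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d, which is what `openssl x509 -checkend 86400` tests for.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
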
	I1213 00:09:21.688162  177307 kubeadm.go:404] StartCluster: {Name:no-preload-143586 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-143586 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.181 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:09:21.688243  177307 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:09:21.688280  177307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:21.727451  177307 cri.go:89] found id: ""
	I1213 00:09:21.727536  177307 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:09:21.739044  177307 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1213 00:09:21.739066  177307 kubeadm.go:636] restartCluster start
	I1213 00:09:21.739124  177307 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 00:09:21.747328  177307 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:21.748532  177307 kubeconfig.go:92] found "no-preload-143586" server: "https://192.168.50.181:8443"
	I1213 00:09:21.751029  177307 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 00:09:21.759501  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:21.759546  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:21.771029  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:21.771048  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:21.771095  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:21.782184  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:22.282507  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:22.282588  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:22.294105  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:22.783207  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:22.783266  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:22.796776  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:23.282325  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:23.282395  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:23.295974  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:23.782516  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:23.782615  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:23.797912  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
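
The block above polls for a running kube-apiserver by re-running `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every half second until it succeeds or the restart logic gives up. Below is a stripped-down sketch of that polling pattern; it runs pgrep locally rather than over SSH, and the timeout is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollAPIServer repeatedly runs pgrep (as in the log above) until the
// kube-apiserver process is visible or the deadline expires.
func pollAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // pgrep exits 0 once a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	fmt.Println(pollAPIServer(2 * time.Minute))
}
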
	I1213 00:09:23.514911  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetIP
	I1213 00:09:23.517973  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:23.518335  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:23.518366  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:23.518566  177409 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1213 00:09:23.523522  177409 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:23.537195  177409 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1213 00:09:23.537261  177409 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:23.579653  177409 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1213 00:09:23.579729  177409 ssh_runner.go:195] Run: which lz4
	I1213 00:09:23.583956  177409 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 00:09:23.588686  177409 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 00:09:23.588720  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1213 00:09:22.095647  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Start
	I1213 00:09:22.095821  176813 main.go:141] libmachine: (old-k8s-version-508612) Ensuring networks are active...
	I1213 00:09:22.096548  176813 main.go:141] libmachine: (old-k8s-version-508612) Ensuring network default is active
	I1213 00:09:22.096936  176813 main.go:141] libmachine: (old-k8s-version-508612) Ensuring network mk-old-k8s-version-508612 is active
	I1213 00:09:22.097366  176813 main.go:141] libmachine: (old-k8s-version-508612) Getting domain xml...
	I1213 00:09:22.097939  176813 main.go:141] libmachine: (old-k8s-version-508612) Creating domain...
	I1213 00:09:23.423128  176813 main.go:141] libmachine: (old-k8s-version-508612) Waiting to get IP...
	I1213 00:09:23.424090  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:23.424606  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:23.424676  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:23.424588  178471 retry.go:31] will retry after 260.416347ms: waiting for machine to come up
	I1213 00:09:23.687268  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:23.687867  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:23.687902  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:23.687814  178471 retry.go:31] will retry after 377.709663ms: waiting for machine to come up
	I1213 00:09:24.067588  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:24.068249  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:24.068277  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:24.068177  178471 retry.go:31] will retry after 480.876363ms: waiting for machine to come up
	I1213 00:09:24.550715  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:24.551244  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:24.551278  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:24.551191  178471 retry.go:31] will retry after 389.885819ms: waiting for machine to come up
	I1213 00:09:24.942898  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:24.943495  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:24.943526  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:24.943443  178471 retry.go:31] will retry after 532.578432ms: waiting for machine to come up
	I1213 00:09:25.478278  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:25.478810  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:25.478845  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:25.478781  178471 retry.go:31] will retry after 599.649827ms: waiting for machine to come up
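
These DBG lines are libmachine polling libvirt for the old-k8s-version-508612 domain's DHCP lease: each failed lookup schedules another attempt after a randomized, slowly growing delay until an IP appears. A minimal sketch of that polling loop, with lookupIP as a hypothetical stand-in for the lease query rather than libmachine's real call:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical stand-in for the libvirt DHCP-lease query;
    // it fails until the guest has obtained an address.
    func lookupIP() (string, error) {
        return "", errors.New("no lease yet")
    }

    func main() {
        for attempt := 1; attempt <= 10; attempt++ {
            if ip, err := lookupIP(); err == nil {
                fmt.Println("machine IP:", ip)
                return
            }
            // Randomized, slowly growing delay between polls, roughly like the retry.go lines above.
            delay := time.Duration(200+rand.Intn(400*attempt)) * time.Millisecond
            fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
            time.Sleep(delay)
        }
        fmt.Println("gave up waiting for machine to come up")
    }
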
	I1213 00:09:22.230086  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:24.729105  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:24.282598  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:24.282708  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:24.298151  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:24.782530  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:24.782639  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:24.798661  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:25.283235  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:25.283393  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:25.297662  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:25.783319  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:25.783436  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:25.797129  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:26.282666  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:26.282789  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:26.295674  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:26.783065  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:26.783147  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:26.794192  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:27.282703  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:27.282775  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:27.294823  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:27.782891  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:27.782975  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:27.798440  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:28.282826  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:28.282909  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:28.293752  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:28.782264  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:28.782325  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:28.793986  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
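
The repeated "Checking apiserver status ..." / "stopped" pairs above come from polling sudo pgrep -xnf kube-apiserver.*minikube.* roughly every half second; pgrep exits with status 1 while no matching process exists, so the warning simply means the apiserver has not started yet. A small sketch of that probe, assuming local sudo access instead of minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverPID runs the same probe the log shows: pgrep exits 1 when nothing
    // matches, which is treated as "apiserver not running yet".
    func apiserverPID() (string, bool) {
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            return "", false
        }
        return string(out), true
    }

    func main() {
        for i := 0; i < 20; i++ {
            if pid, ok := apiserverPID(); ok {
                fmt.Println("apiserver pid:", pid)
                return
            }
            time.Sleep(500 * time.Millisecond) // the log polls at roughly this interval
        }
        fmt.Println("gave up waiting for apiserver process")
    }
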
	I1213 00:09:25.524765  177409 crio.go:444] Took 1.940853 seconds to copy over tarball
	I1213 00:09:25.524843  177409 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 00:09:28.426493  177409 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.901618536s)
	I1213 00:09:28.426522  177409 crio.go:451] Took 2.901730 seconds to extract the tarball
	I1213 00:09:28.426533  177409 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 00:09:28.467170  177409 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:28.520539  177409 crio.go:496] all images are preloaded for cri-o runtime.
	I1213 00:09:28.520567  177409 cache_images.go:84] Images are preloaded, skipping loading
	I1213 00:09:28.520654  177409 ssh_runner.go:195] Run: crio config
	I1213 00:09:28.588320  177409 cni.go:84] Creating CNI manager for ""
	I1213 00:09:28.588348  177409 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:28.588370  177409 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1213 00:09:28.588395  177409 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-743278 NodeName:default-k8s-diff-port-743278 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 00:09:28.588593  177409 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-743278"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
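
	The YAML above is the kubeadm config minikube renders for default-k8s-diff-port-743278: InitConfiguration (advertise address and the non-default bindPort 8444), ClusterConfiguration (certSANs, admission plugins, control-plane endpoint), plus KubeletConfiguration and KubeProxyConfiguration. One way to sanity-check such a file on the node, sketched here under the assumption that kubeadm is on PATH and the file has been written to the path the log uses a few lines below:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Dry-run the rendered config; this validates it without changing node state.
        out, err := exec.Command("sudo", "kubeadm", "init",
            "--config", "/var/tmp/minikube/kubeadm.yaml.new", "--dry-run").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("dry-run failed:", err)
        }
    }
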
	
	I1213 00:09:28.588687  177409 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-743278 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-743278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
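
	The [Unit]/[Service] fragment above is the 10-kubeadm.conf drop-in that replaces the kubelet ExecStart with minikube's binary and node-specific flags; after it is copied into /etc/systemd/system/kubelet.service.d/ (next lines), systemd has to reload for it to take effect. An illustrative sketch with an abbreviated flag set, assuming root privileges; it is not minikube's own installer:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Install a kubelet drop-in like the one above (flags abbreviated), then let systemd pick it up.
        dropIn := "[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf\n"
        if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
            fmt.Println("write failed:", err)
            return
        }
        for _, args := range [][]string{{"daemon-reload"}, {"restart", "kubelet"}} {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                fmt.Println("systemctl failed:", err, string(out))
                return
            }
        }
        fmt.Println("kubelet drop-in installed, kubelet restarted")
    }
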
	I1213 00:09:28.588755  177409 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1213 00:09:28.597912  177409 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:09:28.597987  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:09:28.608324  177409 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1213 00:09:28.627102  177409 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 00:09:28.646837  177409 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1213 00:09:28.664534  177409 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I1213 00:09:28.668580  177409 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
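
	The bash one-liner above is an idempotent /etc/hosts update: any existing line ending in a tab plus control-plane.minikube.internal is filtered out, a fresh "192.168.72.144<TAB>control-plane.minikube.internal" entry is appended, and the result is copied back with sudo. The same transformation as a small pure function, for illustration only:

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost drops any existing line ending in "<TAB>name" and appends a fresh
    // "ip<TAB>name" entry, mirroring the grep -v / echo pipeline in the log.
    func upsertHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue
            }
            if line != "" {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        current := "127.0.0.1\tlocalhost\n192.168.72.1\thost.minikube.internal\n"
        fmt.Print(upsertHost(current, "192.168.72.144", "control-plane.minikube.internal"))
    }
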
	I1213 00:09:28.680736  177409 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278 for IP: 192.168.72.144
	I1213 00:09:28.680777  177409 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:28.680971  177409 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:09:28.681037  177409 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:09:28.681140  177409 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/client.key
	I1213 00:09:28.681234  177409 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/apiserver.key.1dd7f3f2
	I1213 00:09:28.681301  177409 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/proxy-client.key
	I1213 00:09:28.681480  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:09:28.681525  177409 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:09:28.681543  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:09:28.681587  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:09:28.681636  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:09:28.681681  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:09:28.681743  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:28.682710  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:09:28.707852  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 00:09:28.732792  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:09:28.755545  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 00:09:28.779880  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:09:28.805502  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:09:28.829894  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:09:28.853211  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:09:28.877291  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:09:28.899870  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:09:28.922141  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:09:28.945634  177409 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:09:28.962737  177409 ssh_runner.go:195] Run: openssl version
	I1213 00:09:28.968869  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:09:28.980535  177409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:09:28.985219  177409 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:09:28.985284  177409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:09:28.990911  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 00:09:29.001595  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:09:29.012408  177409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:29.017644  177409 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:29.017760  177409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:29.023914  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 00:09:29.034793  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:09:29.045825  177409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:09:29.050538  177409 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:09:29.050584  177409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:09:29.057322  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1213 00:09:29.067993  177409 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:09:29.072782  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 00:09:29.078806  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 00:09:29.084744  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 00:09:29.090539  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 00:09:29.096734  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 00:09:29.102729  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
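
	The openssl x509 -checkend 86400 probes above ask whether each control-plane certificate will still be valid 24 hours from now: openssl exits 0 if it will and non-zero if it expires within that window. A small wrapper over the same command, with the certificate list taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // expiresSoon wraps the probe from the log: `openssl x509 -checkend 86400`
    // exits 0 if the certificate is still valid 24h from now, non-zero otherwise.
    func expiresSoon(path string) bool {
        err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run()
        return err != nil
    }

    func main() {
        for _, c := range []string{
            "/var/lib/minikube/certs/apiserver-etcd-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/front-proxy-client.crt",
        } {
            fmt.Printf("%s expires within 24h: %v\n", c, expiresSoon(c))
        }
    }
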
	I1213 00:09:29.108909  177409 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-743278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-743278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:09:29.109022  177409 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:09:29.109095  177409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:29.158003  177409 cri.go:89] found id: ""
	I1213 00:09:29.158100  177409 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:09:29.169464  177409 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1213 00:09:29.169500  177409 kubeadm.go:636] restartCluster start
	I1213 00:09:29.169555  177409 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 00:09:29.180347  177409 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:29.181609  177409 kubeconfig.go:92] found "default-k8s-diff-port-743278" server: "https://192.168.72.144:8444"
	I1213 00:09:29.184377  177409 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 00:09:29.193593  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.193645  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.205447  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:29.205465  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.205519  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.221169  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:29.721729  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.721835  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.735942  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:26.080407  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:26.081034  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:26.081061  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:26.080973  178471 retry.go:31] will retry after 1.103545811s: waiting for machine to come up
	I1213 00:09:27.186673  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:27.187208  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:27.187241  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:27.187152  178471 retry.go:31] will retry after 977.151221ms: waiting for machine to come up
	I1213 00:09:28.165799  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:28.166219  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:28.166257  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:28.166166  178471 retry.go:31] will retry after 1.27451971s: waiting for machine to come up
	I1213 00:09:29.441683  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:29.442203  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:29.442240  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:29.442122  178471 retry.go:31] will retry after 1.620883976s: waiting for machine to come up
	I1213 00:09:26.733297  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:29.624623  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:29.282975  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.621544  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.632749  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:29.783112  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.783214  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.794919  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:30.282457  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:30.282528  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:30.293852  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:30.782400  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:30.782499  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:30.797736  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.282276  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:31.282367  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:31.298115  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.759957  177307 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1213 00:09:31.760001  177307 kubeadm.go:1135] stopping kube-system containers ...
	I1213 00:09:31.760013  177307 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 00:09:31.760078  177307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:31.799045  177307 cri.go:89] found id: ""
	I1213 00:09:31.799146  177307 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 00:09:31.813876  177307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:09:31.823305  177307 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:09:31.823382  177307 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:09:31.831741  177307 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1213 00:09:31.831767  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:31.961871  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:32.826330  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:33.045107  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:33.119065  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
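
	Because admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf were all missing, the restart path replays individual kubeadm init phases against the saved config instead of running a full kubeadm init; the order above is certs, kubeconfig, kubelet-start, control-plane, then local etcd. A sketch of that sequence, assuming kubeadm is on PATH and the config sits at the path shown in the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Replays the same init phases, in the order the log runs them.
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"kubeadm", "init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                fmt.Println("phase", p, "failed:", err, string(out))
                return
            }
            fmt.Println("phase", p, "done")
        }
    }
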
	I1213 00:09:33.187783  177307 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:09:33.187887  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:33.217142  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:33.735695  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:34.236063  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:30.221906  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:30.230723  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:30.243849  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:30.721380  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:30.721492  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:30.734401  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.222026  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:31.222150  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:31.235400  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.722107  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:31.722189  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:31.735415  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:32.222216  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:32.222365  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:32.238718  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:32.721270  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:32.721389  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:32.735677  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:33.222261  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:33.222329  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:33.243918  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:33.721349  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:33.721438  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:33.738138  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:34.221645  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:34.221748  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:34.238845  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:34.721320  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:34.721390  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:34.738271  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.065065  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:31.065494  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:31.065528  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:31.065436  178471 retry.go:31] will retry after 2.452686957s: waiting for machine to come up
	I1213 00:09:33.519937  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:33.520505  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:33.520537  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:33.520468  178471 retry.go:31] will retry after 2.830999713s: waiting for machine to come up
	I1213 00:09:31.729101  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:34.229143  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:34.735218  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:35.235570  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:35.736120  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:35.764916  177307 api_server.go:72] duration metric: took 2.577131698s to wait for apiserver process to appear ...
	I1213 00:09:35.764942  177307 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:09:35.764971  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:35.765820  177307 api_server.go:269] stopped: https://192.168.50.181:8443/healthz: Get "https://192.168.50.181:8443/healthz": dial tcp 192.168.50.181:8443: connect: connection refused
	I1213 00:09:35.765860  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:35.766257  177307 api_server.go:269] stopped: https://192.168.50.181:8443/healthz: Get "https://192.168.50.181:8443/healthz": dial tcp 192.168.50.181:8443: connect: connection refused
	I1213 00:09:36.266842  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:35.221935  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:35.222069  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:35.240609  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:35.721801  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:35.721965  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:35.765295  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:36.221944  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:36.222021  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:36.238211  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:36.721750  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:36.721830  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:36.736765  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:37.221936  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:37.222185  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:37.238002  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:37.721304  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:37.721385  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:37.734166  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:38.221603  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:38.221701  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:38.235174  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:38.721704  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:38.721794  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:38.735963  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:39.193664  177409 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1213 00:09:39.193713  177409 kubeadm.go:1135] stopping kube-system containers ...
	I1213 00:09:39.193727  177409 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 00:09:39.193787  177409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:39.238262  177409 cri.go:89] found id: ""
	I1213 00:09:39.238336  177409 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 00:09:39.258625  177409 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:09:39.271127  177409 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:09:39.271196  177409 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:09:39.280870  177409 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1213 00:09:39.280906  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:39.399746  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:36.353967  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:36.354453  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:36.354481  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:36.354415  178471 retry.go:31] will retry after 2.983154328s: waiting for machine to come up
	I1213 00:09:39.341034  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:39.341497  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:39.341526  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:39.341462  178471 retry.go:31] will retry after 3.436025657s: waiting for machine to come up
	I1213 00:09:36.230811  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:38.729730  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:40.732654  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:39.693843  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:39.693877  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:39.693896  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:39.767118  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:39.767153  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:39.767169  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:39.787684  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:39.787725  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:40.267069  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:40.272416  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:40.272464  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:40.766651  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:40.799906  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:40.799942  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:41.266411  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:41.271259  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 200:
	ok
	I1213 00:09:41.278691  177307 api_server.go:141] control plane version: v1.29.0-rc.2
	I1213 00:09:41.278715  177307 api_server.go:131] duration metric: took 5.51376527s to wait for apiserver health ...
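	[editor's note] The repeated "Checking apiserver healthz ... returned 500/200" lines above come from a poll loop in api_server.go. For readers unfamiliar with that flow, the sketch below shows the general shape of such a loop; it is an illustrative reconstruction only, the function names and the InsecureSkipVerify probe transport are assumptions and not minikube's actual implementation.
	// healthz_wait_sketch.go -- illustrative only, not minikube source.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls /healthz until it returns 200 or the timeout elapses,
	// printing non-200 bodies the way the log above does.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Health probe against a self-signed apiserver cert (assumption).
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("stopped: %v\n", err) // e.g. connection refused while apiserver restarts
				time.Sleep(500 * time.Millisecond)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "returned 200: ok"
			}
			fmt.Printf("returned %d:\n%s\n", resp.StatusCode, body) // e.g. poststarthook failures
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.50.181:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}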
	I1213 00:09:41.278725  177307 cni.go:84] Creating CNI manager for ""
	I1213 00:09:41.278732  177307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:41.280473  177307 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:09:41.281924  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:09:41.308598  177307 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:09:41.330367  177307 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:09:41.342017  177307 system_pods.go:59] 8 kube-system pods found
	I1213 00:09:41.342048  177307 system_pods.go:61] "coredns-76f75df574-87nc6" [829c7a44-85a0-4ed0-b98a-b5016aa04b97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 00:09:41.342054  177307 system_pods.go:61] "etcd-no-preload-143586" [b50e57af-530a-4689-bf42-a9f74fa6bea1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 00:09:41.342065  177307 system_pods.go:61] "kube-apiserver-no-preload-143586" [3aed4b84-e029-433a-8394-f99608b52edd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 00:09:41.342071  177307 system_pods.go:61] "kube-controller-manager-no-preload-143586" [f88e182a-0a81-4c7b-b2b3-d6911baf340f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 00:09:41.342080  177307 system_pods.go:61] "kube-proxy-8k9x6" [a71d2257-2012-4d0d-948d-d69c0c04bd2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 00:09:41.342086  177307 system_pods.go:61] "kube-scheduler-no-preload-143586" [dfb7b176-fbf8-4542-890f-1eba0f699b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 00:09:41.342098  177307 system_pods.go:61] "metrics-server-57f55c9bc5-px5lm" [25b8b500-0ad0-4da3-8f7f-d8c46a848e8a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:09:41.342106  177307 system_pods.go:61] "storage-provisioner" [bb18a95a-ed99-43f7-bc6f-333e0b86cacc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 00:09:41.342114  177307 system_pods.go:74] duration metric: took 11.726461ms to wait for pod list to return data ...
	I1213 00:09:41.342132  177307 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:09:41.345985  177307 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:09:41.346011  177307 node_conditions.go:123] node cpu capacity is 2
	I1213 00:09:41.346021  177307 node_conditions.go:105] duration metric: took 3.884209ms to run NodePressure ...
	I1213 00:09:41.346038  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:41.682789  177307 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1213 00:09:41.690867  177307 kubeadm.go:787] kubelet initialised
	I1213 00:09:41.690892  177307 kubeadm.go:788] duration metric: took 8.076203ms waiting for restarted kubelet to initialise ...
	I1213 00:09:41.690902  177307 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:41.698622  177307 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-87nc6" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:43.720619  177307 pod_ready.go:102] pod "coredns-76f75df574-87nc6" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:40.471390  177409 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.071602244s)
	I1213 00:09:40.471425  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:40.665738  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:40.786290  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:40.859198  177409 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:09:40.859302  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:40.887488  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:41.406398  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:41.906653  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:42.405784  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:42.906462  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:43.406489  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:43.432933  177409 api_server.go:72] duration metric: took 2.573735322s to wait for apiserver process to appear ...
	I1213 00:09:43.432975  177409 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:09:43.432997  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:43.433588  177409 api_server.go:269] stopped: https://192.168.72.144:8444/healthz: Get "https://192.168.72.144:8444/healthz": dial tcp 192.168.72.144:8444: connect: connection refused
	I1213 00:09:43.433641  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:43.434089  177409 api_server.go:269] stopped: https://192.168.72.144:8444/healthz: Get "https://192.168.72.144:8444/healthz": dial tcp 192.168.72.144:8444: connect: connection refused
	I1213 00:09:43.934469  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:42.779498  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.779971  176813 main.go:141] libmachine: (old-k8s-version-508612) Found IP for machine: 192.168.39.70
	I1213 00:09:42.779993  176813 main.go:141] libmachine: (old-k8s-version-508612) Reserving static IP address...
	I1213 00:09:42.780011  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has current primary IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.780466  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "old-k8s-version-508612", mac: "52:54:00:dd:da:91", ip: "192.168.39.70"} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:42.780504  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | skip adding static IP to network mk-old-k8s-version-508612 - found existing host DHCP lease matching {name: "old-k8s-version-508612", mac: "52:54:00:dd:da:91", ip: "192.168.39.70"}
	I1213 00:09:42.780524  176813 main.go:141] libmachine: (old-k8s-version-508612) Reserved static IP address: 192.168.39.70
	I1213 00:09:42.780547  176813 main.go:141] libmachine: (old-k8s-version-508612) Waiting for SSH to be available...
	I1213 00:09:42.780559  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Getting to WaitForSSH function...
	I1213 00:09:42.783019  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.783434  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:42.783482  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.783566  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Using SSH client type: external
	I1213 00:09:42.783598  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa (-rw-------)
	I1213 00:09:42.783638  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:09:42.783661  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | About to run SSH command:
	I1213 00:09:42.783681  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | exit 0
	I1213 00:09:42.885148  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | SSH cmd err, output: <nil>: 
	I1213 00:09:42.885690  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetConfigRaw
	I1213 00:09:42.886388  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetIP
	I1213 00:09:42.889440  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.889898  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:42.889937  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.890209  176813 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/config.json ...
	I1213 00:09:42.890423  176813 machine.go:88] provisioning docker machine ...
	I1213 00:09:42.890444  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:42.890685  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetMachineName
	I1213 00:09:42.890874  176813 buildroot.go:166] provisioning hostname "old-k8s-version-508612"
	I1213 00:09:42.890899  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetMachineName
	I1213 00:09:42.891039  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:42.893678  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.894021  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:42.894051  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.894174  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:42.894391  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:42.894556  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:42.894720  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:42.894909  176813 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:42.895383  176813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1213 00:09:42.895406  176813 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-508612 && echo "old-k8s-version-508612" | sudo tee /etc/hostname
	I1213 00:09:43.045290  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-508612
	
	I1213 00:09:43.045345  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.048936  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.049438  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.049476  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.049662  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:43.049877  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.050074  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.050231  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:43.050413  176813 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:43.050888  176813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1213 00:09:43.050919  176813 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-508612' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-508612/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-508612' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:09:43.183021  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:09:43.183061  176813 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:09:43.183089  176813 buildroot.go:174] setting up certificates
	I1213 00:09:43.183102  176813 provision.go:83] configureAuth start
	I1213 00:09:43.183115  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetMachineName
	I1213 00:09:43.183467  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetIP
	I1213 00:09:43.186936  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.187409  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.187441  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.187620  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.190125  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.190560  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.190612  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.190775  176813 provision.go:138] copyHostCerts
	I1213 00:09:43.190842  176813 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:09:43.190861  176813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:09:43.190936  176813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:09:43.191113  176813 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:09:43.191126  176813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:09:43.191158  176813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:09:43.191245  176813 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:09:43.191256  176813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:09:43.191284  176813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:09:43.191354  176813 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-508612 san=[192.168.39.70 192.168.39.70 localhost 127.0.0.1 minikube old-k8s-version-508612]
	I1213 00:09:43.321927  176813 provision.go:172] copyRemoteCerts
	I1213 00:09:43.321999  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:09:43.322039  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.325261  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.325653  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.325686  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.325920  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:43.326128  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.326300  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:43.326474  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:09:43.420656  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:09:43.445997  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1213 00:09:43.471466  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 00:09:43.500104  176813 provision.go:86] duration metric: configureAuth took 316.983913ms
	I1213 00:09:43.500137  176813 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:09:43.500380  176813 config.go:182] Loaded profile config "old-k8s-version-508612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1213 00:09:43.500554  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.503567  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.503994  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.504034  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.504320  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:43.504551  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.504797  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.504978  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:43.505164  176813 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:43.505640  176813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1213 00:09:43.505668  176813 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:09:43.859639  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:09:43.859723  176813 machine.go:91] provisioned docker machine in 969.28446ms
	I1213 00:09:43.859741  176813 start.go:300] post-start starting for "old-k8s-version-508612" (driver="kvm2")
	I1213 00:09:43.859754  176813 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:09:43.859781  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:43.860174  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:09:43.860207  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.863407  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.863903  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.863944  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.864142  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:43.864340  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.864604  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:43.864907  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:09:43.957616  176813 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:09:43.963381  176813 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:09:43.963413  176813 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:09:43.963489  176813 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:09:43.963594  176813 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:09:43.963710  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:09:43.972902  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:44.001469  176813 start.go:303] post-start completed in 141.706486ms
	I1213 00:09:44.001503  176813 fix.go:56] fixHost completed within 21.932134773s
	I1213 00:09:44.001532  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:44.004923  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.005334  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:44.005410  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.005545  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:44.005846  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:44.006067  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:44.006198  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:44.006401  176813 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:44.006815  176813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1213 00:09:44.006841  176813 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1213 00:09:44.134363  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702426184.079167065
	
	I1213 00:09:44.134389  176813 fix.go:206] guest clock: 1702426184.079167065
	I1213 00:09:44.134398  176813 fix.go:219] Guest: 2023-12-13 00:09:44.079167065 +0000 UTC Remote: 2023-12-13 00:09:44.001508908 +0000 UTC m=+368.244893563 (delta=77.658157ms)
	I1213 00:09:44.134434  176813 fix.go:190] guest clock delta is within tolerance: 77.658157ms
	I1213 00:09:44.134446  176813 start.go:83] releasing machines lock for "old-k8s-version-508612", held for 22.06510734s
	I1213 00:09:44.134469  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:44.134760  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetIP
	I1213 00:09:44.137820  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.138245  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:44.138275  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.138444  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:44.138957  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:44.139152  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:44.139229  176813 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:09:44.139300  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:44.139358  176813 ssh_runner.go:195] Run: cat /version.json
	I1213 00:09:44.139383  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:44.142396  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.142920  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:44.142981  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.143041  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.143197  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:44.143340  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:44.143473  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:44.143487  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:44.143505  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.143633  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:44.143628  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:09:44.143786  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:44.143913  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:44.144041  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:09:44.235010  176813 ssh_runner.go:195] Run: systemctl --version
	I1213 00:09:44.263174  176813 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:09:44.424330  176813 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:09:44.433495  176813 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:09:44.433573  176813 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:09:44.454080  176813 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:09:44.454106  176813 start.go:475] detecting cgroup driver to use...
	I1213 00:09:44.454173  176813 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:09:44.482370  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:09:44.499334  176813 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:09:44.499429  176813 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:09:44.516413  176813 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:09:44.529636  176813 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:09:44.638215  176813 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:09:44.774229  176813 docker.go:219] disabling docker service ...
	I1213 00:09:44.774304  176813 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:09:44.790414  176813 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:09:44.804909  176813 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:09:44.938205  176813 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:09:45.069429  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:09:45.085783  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:09:45.105487  176813 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1213 00:09:45.105558  176813 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:45.117662  176813 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:09:45.117789  176813 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:45.129560  176813 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:45.139168  176813 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:45.148466  176813 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:09:45.157626  176813 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:09:45.166608  176813 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:09:45.166675  176813 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:09:45.179666  176813 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:09:45.190356  176813 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:09:45.366019  176813 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 00:09:45.549130  176813 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:09:45.549209  176813 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:09:45.554753  176813 start.go:543] Will wait 60s for crictl version
	I1213 00:09:45.554809  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:45.559452  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:09:45.605106  176813 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1213 00:09:45.605180  176813 ssh_runner.go:195] Run: crio --version
	I1213 00:09:45.654428  176813 ssh_runner.go:195] Run: crio --version
	I1213 00:09:45.711107  176813 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1213 00:09:45.712598  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetIP
	I1213 00:09:45.716022  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:45.716371  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:45.716405  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:45.716751  176813 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 00:09:45.722339  176813 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:45.739528  176813 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1213 00:09:45.739594  176813 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:45.786963  176813 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1213 00:09:45.787044  176813 ssh_runner.go:195] Run: which lz4
	I1213 00:09:45.791462  176813 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1213 00:09:45.795923  176813 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 00:09:45.795952  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1213 00:09:43.228658  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:45.231385  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:45.721999  177307 pod_ready.go:92] pod "coredns-76f75df574-87nc6" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:45.722026  177307 pod_ready.go:81] duration metric: took 4.023377357s waiting for pod "coredns-76f75df574-87nc6" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:45.722038  177307 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:47.744891  177307 pod_ready.go:102] pod "etcd-no-preload-143586" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:48.255190  177307 pod_ready.go:92] pod "etcd-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:48.255220  177307 pod_ready.go:81] duration metric: took 2.533174326s waiting for pod "etcd-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:48.255233  177307 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:48.263450  177307 pod_ready.go:92] pod "kube-apiserver-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:48.263477  177307 pod_ready.go:81] duration metric: took 8.236475ms waiting for pod "kube-apiserver-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:48.263489  177307 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:48.212975  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:48.213009  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:48.213033  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:48.303921  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:48.303963  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:48.435167  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:48.442421  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:48.442455  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:48.934740  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:48.941126  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:48.941161  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:49.434967  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:49.444960  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:49.445016  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:49.935234  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:49.941400  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I1213 00:09:49.951057  177409 api_server.go:141] control plane version: v1.28.4
	I1213 00:09:49.951094  177409 api_server.go:131] duration metric: took 6.518109828s to wait for apiserver health ...
	I1213 00:09:49.951105  177409 cni.go:84] Creating CNI manager for ""
	I1213 00:09:49.951115  177409 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:49.953198  177409 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:09:49.954914  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:09:49.989291  177409 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:09:47.527308  176813 crio.go:444] Took 1.735860 seconds to copy over tarball
	I1213 00:09:47.527390  176813 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 00:09:50.641162  176813 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.113740813s)
	I1213 00:09:50.641195  176813 crio.go:451] Took 3.113856 seconds to extract the tarball
	I1213 00:09:50.641208  176813 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 00:09:50.683194  176813 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:50.729476  176813 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1213 00:09:50.729503  176813 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 00:09:50.729574  176813 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:50.729602  176813 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1213 00:09:50.729611  176813 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1213 00:09:50.729617  176813 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:50.729653  176813 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:50.729605  176813 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:50.729572  176813 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:50.729589  176813 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:50.730849  176813 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:50.730908  176813 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:50.730924  176813 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1213 00:09:50.730968  176813 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:50.730986  176813 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:50.730997  176813 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:50.730847  176813 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1213 00:09:50.731163  176813 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:47.235674  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:49.728030  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:50.051886  177409 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:09:50.069774  177409 system_pods.go:59] 8 kube-system pods found
	I1213 00:09:50.069817  177409 system_pods.go:61] "coredns-5dd5756b68-ftv9l" [60d9730b-2e6b-4263-a70d-273cf6837f60] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 00:09:50.069834  177409 system_pods.go:61] "etcd-default-k8s-diff-port-743278" [4c78aadc-8213-4cd3-a365-e8d71d00b1ed] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 00:09:50.069849  177409 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-743278" [2590235b-fc24-41ed-8879-909cbba26d5c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 00:09:50.069862  177409 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-743278" [58396a31-0a0d-4a3f-b658-e27f836affd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 00:09:50.069875  177409 system_pods.go:61] "kube-proxy-zk4wl" [e20fe8f7-0c1f-4be3-8184-cd3d6cc19a43] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 00:09:50.069887  177409 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-743278" [e4312815-a719-4ab5-b628-9958dd7ce658] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 00:09:50.069907  177409 system_pods.go:61] "metrics-server-57f55c9bc5-6q9jg" [b1849258-4fd1-43a5-b67b-02d8e44acd8b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:09:50.069919  177409 system_pods.go:61] "storage-provisioner" [d87ee16e-300f-4797-b0be-efc256d0e827] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 00:09:50.069932  177409 system_pods.go:74] duration metric: took 18.020213ms to wait for pod list to return data ...
	I1213 00:09:50.069944  177409 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:09:50.073659  177409 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:09:50.073688  177409 node_conditions.go:123] node cpu capacity is 2
	I1213 00:09:50.073701  177409 node_conditions.go:105] duration metric: took 3.752016ms to run NodePressure ...
	I1213 00:09:50.073722  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:50.545413  177409 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1213 00:09:50.559389  177409 kubeadm.go:787] kubelet initialised
	I1213 00:09:50.559421  177409 kubeadm.go:788] duration metric: took 13.971205ms waiting for restarted kubelet to initialise ...
	I1213 00:09:50.559442  177409 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:50.568069  177409 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.580294  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.580327  177409 pod_ready.go:81] duration metric: took 12.225698ms waiting for pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.580340  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.580348  177409 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.588859  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.588893  177409 pod_ready.go:81] duration metric: took 8.526992ms waiting for pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.588909  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.588917  177409 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.609726  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.609759  177409 pod_ready.go:81] duration metric: took 20.834011ms waiting for pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.609773  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.609781  177409 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.626724  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.626757  177409 pod_ready.go:81] duration metric: took 16.966751ms waiting for pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.626770  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.626777  177409 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zk4wl" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.950893  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "kube-proxy-zk4wl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.950927  177409 pod_ready.go:81] duration metric: took 324.143266ms waiting for pod "kube-proxy-zk4wl" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.950939  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "kube-proxy-zk4wl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.950948  177409 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:51.465200  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:51.465227  177409 pod_ready.go:81] duration metric: took 514.267219ms waiting for pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:51.465242  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:51.465251  177409 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:52.111655  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:52.111690  177409 pod_ready.go:81] duration metric: took 646.423162ms waiting for pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:52.111707  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:52.111716  177409 pod_ready.go:38] duration metric: took 1.552263211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:52.111735  177409 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:09:52.125125  177409 ops.go:34] apiserver oom_adj: -16
	I1213 00:09:52.125152  177409 kubeadm.go:640] restartCluster took 22.955643397s
	I1213 00:09:52.125175  177409 kubeadm.go:406] StartCluster complete in 23.016262726s
	I1213 00:09:52.125204  177409 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:52.125379  177409 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:09:52.128126  177409 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:52.226763  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:09:52.226947  177409 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:09:52.227030  177409 config.go:182] Loaded profile config "default-k8s-diff-port-743278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:09:52.227060  177409 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-743278"
	I1213 00:09:52.227071  177409 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-743278"
	I1213 00:09:52.227082  177409 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-743278"
	I1213 00:09:52.227088  177409 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-743278"
	W1213 00:09:52.227092  177409 addons.go:240] addon storage-provisioner should already be in state true
	I1213 00:09:52.227115  177409 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-743278"
	I1213 00:09:52.227154  177409 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-743278"
	I1213 00:09:52.227165  177409 host.go:66] Checking if "default-k8s-diff-port-743278" exists ...
	W1213 00:09:52.227173  177409 addons.go:240] addon metrics-server should already be in state true
	I1213 00:09:52.227252  177409 host.go:66] Checking if "default-k8s-diff-port-743278" exists ...
	I1213 00:09:52.227633  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.227667  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.227633  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.227698  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.227728  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.227794  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.500974  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I1213 00:09:52.501503  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.502103  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.502130  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.502518  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.503096  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.503120  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40999
	I1213 00:09:52.503173  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.503249  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35141
	I1213 00:09:52.503460  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.503653  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.503952  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.503979  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.504117  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.504137  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.504326  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.504485  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.504680  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:52.504910  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.504957  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.508425  177409 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-743278"
	W1213 00:09:52.508466  177409 addons.go:240] addon default-storageclass should already be in state true
	I1213 00:09:52.508495  177409 host.go:66] Checking if "default-k8s-diff-port-743278" exists ...
	I1213 00:09:52.508941  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.508989  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.520593  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35027
	I1213 00:09:52.521055  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37697
	I1213 00:09:52.521104  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.521443  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.521602  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.521630  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.521891  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.521917  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.521956  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.522162  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:52.522300  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.522506  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:52.523942  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:52.524208  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40367
	I1213 00:09:52.524419  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:52.612780  177409 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 00:09:52.524612  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.755661  177409 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 00:09:52.941509  177409 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:52.941551  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 00:09:53.149407  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:52.881597  177409 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-743278" context rescaled to 1 replicas
	I1213 00:09:53.149472  177409 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:09:53.149496  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:09:52.884700  177409 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1213 00:09:52.756216  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:53.149523  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:53.149532  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:53.149484  177409 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:09:53.150147  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:53.153109  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.153288  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.360880  177409 out.go:177] * Verifying Kubernetes components...
	I1213 00:09:53.153717  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:53.153952  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:53.361036  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:50.301405  177307 pod_ready.go:102] pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:52.803001  177307 pod_ready.go:102] pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:53.361074  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:53.466451  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.361087  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.361322  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:53.466546  177409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:09:53.361364  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:53.361590  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:53.466661  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:53.466906  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:53.466963  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:53.467166  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:53.467266  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:53.489618  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41785
	I1213 00:09:53.490349  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:53.490932  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:53.490951  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:53.491365  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:53.491579  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:53.494223  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:53.495774  177409 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:09:53.495796  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:09:53.495816  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:53.499620  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.500099  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:53.500124  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.500405  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:53.500592  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:53.500734  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:53.501069  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:53.667878  177409 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-743278" to be "Ready" ...
	I1213 00:09:53.806167  177409 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 00:09:53.806194  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 00:09:53.807837  177409 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:09:53.808402  177409 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:09:53.915171  177409 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 00:09:53.915199  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 00:09:53.993146  177409 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:09:53.993172  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 00:09:54.071008  177409 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:09:50.865405  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:50.866538  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:50.867587  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1213 00:09:50.871289  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:50.872472  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1213 00:09:50.878541  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:50.882665  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:50.978405  176813 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1213 00:09:50.978458  176813 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:50.978527  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.038778  176813 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1213 00:09:51.038824  176813 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:51.038877  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.048868  176813 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1213 00:09:51.048925  176813 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1213 00:09:51.048983  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.054956  176813 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1213 00:09:51.055003  176813 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:51.055045  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.055045  176813 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1213 00:09:51.055133  176813 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1213 00:09:51.055162  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.069915  176813 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1213 00:09:51.069971  176813 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:51.070018  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.073904  176813 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1213 00:09:51.073955  176813 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:51.073990  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:51.074058  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:51.073997  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.074127  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1213 00:09:51.074173  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:51.074270  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1213 00:09:51.076866  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:51.216889  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:51.217032  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1213 00:09:51.217046  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1213 00:09:51.217118  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1213 00:09:51.217157  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1213 00:09:51.217213  176813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1213 00:09:51.217804  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1213 00:09:51.217887  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1213 00:09:51.224310  176813 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1213 00:09:51.224329  176813 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1213 00:09:51.224373  176813 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1213 00:09:51.270398  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1213 00:09:51.651719  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:53.599238  176813 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.374835203s)
	I1213 00:09:53.599269  176813 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1213 00:09:53.599323  176813 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.947557973s)
	I1213 00:09:53.599398  176813 cache_images.go:92] LoadImages completed in 2.869881827s
	W1213 00:09:53.599497  176813 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I1213 00:09:53.599587  176813 ssh_runner.go:195] Run: crio config
	I1213 00:09:53.669735  176813 cni.go:84] Creating CNI manager for ""
	I1213 00:09:53.669767  176813 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:53.669792  176813 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1213 00:09:53.669814  176813 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.70 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-508612 NodeName:old-k8s-version-508612 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1213 00:09:53.669991  176813 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-508612"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-508612
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.70:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 00:09:53.670076  176813 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-508612 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-508612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1213 00:09:53.670138  176813 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1213 00:09:53.680033  176813 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:09:53.680120  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:09:53.689595  176813 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1213 00:09:53.707167  176813 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 00:09:53.726978  176813 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1213 00:09:53.746191  176813 ssh_runner.go:195] Run: grep 192.168.39.70	control-plane.minikube.internal$ /etc/hosts
	I1213 00:09:53.750290  176813 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:53.763369  176813 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612 for IP: 192.168.39.70
	I1213 00:09:53.763407  176813 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:53.763598  176813 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:09:53.763671  176813 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:09:53.763776  176813 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/client.key
	I1213 00:09:53.763855  176813 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/apiserver.key.5467de6f
	I1213 00:09:53.763914  176813 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/proxy-client.key
	I1213 00:09:53.764055  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:09:53.764098  176813 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:09:53.764115  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:09:53.764158  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:09:53.764195  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:09:53.764238  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:09:53.764297  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:53.765315  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:09:53.793100  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 00:09:53.821187  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:09:53.847791  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 00:09:53.873741  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:09:53.903484  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:09:53.930420  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:09:53.958706  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:09:53.986236  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:09:54.011105  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:09:54.034546  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:09:54.070680  176813 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:09:54.093063  176813 ssh_runner.go:195] Run: openssl version
	I1213 00:09:54.100686  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:09:54.114647  176813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:54.121380  176813 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:54.121463  176813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:54.128895  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 00:09:54.142335  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:09:54.155146  176813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:09:54.159746  176813 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:09:54.159817  176813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:09:54.166153  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1213 00:09:54.176190  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:09:54.187049  176813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:09:54.191667  176813 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:09:54.191737  176813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:09:54.197335  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 00:09:54.208790  176813 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:09:54.213230  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 00:09:54.219377  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 00:09:54.225539  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 00:09:54.232970  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 00:09:54.240720  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 00:09:54.247054  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 00:09:54.253486  176813 kubeadm.go:404] StartCluster: {Name:old-k8s-version-508612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-508612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:09:54.253600  176813 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:09:54.253674  176813 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:54.303024  176813 cri.go:89] found id: ""
	I1213 00:09:54.303102  176813 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:09:54.317795  176813 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1213 00:09:54.317827  176813 kubeadm.go:636] restartCluster start
	I1213 00:09:54.317884  176813 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 00:09:54.331180  176813 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:54.332572  176813 kubeconfig.go:92] found "old-k8s-version-508612" server: "https://192.168.39.70:8443"
	I1213 00:09:54.335079  176813 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 00:09:54.346247  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:54.346292  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:54.362692  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:54.362720  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:54.362776  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:54.377570  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:54.878307  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:54.878384  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:54.891159  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:55.377679  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:55.377789  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:55.392860  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:52.229764  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:54.232636  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:55.162034  177409 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.354143542s)
	I1213 00:09:55.162091  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.162103  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.162486  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.162503  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.162519  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.162536  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.162554  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.162887  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.162916  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.162961  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.255531  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.255561  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.255844  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.255867  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.686976  177409 node_ready.go:58] node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:55.814831  177409 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.006392676s)
	I1213 00:09:55.814885  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.814905  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.815237  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.815300  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.815315  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.815325  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.815675  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.815693  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.815721  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.959447  177409 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.88836869s)
	I1213 00:09:55.959502  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.959519  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.959909  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.959931  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.959941  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.959943  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.959950  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.960189  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.960205  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.960223  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.960226  177409 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-743278"
	I1213 00:09:55.962464  177409 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
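
Each addon manifest above is applied by shelling out to the kubectl binary bundled for the cluster's Kubernetes version, with KUBECONFIG pointed at the in-VM config. A rough sketch of that pattern with os/exec (paths are copied from the log; running without sudo against a locally reachable kubeconfig is an assumption of this sketch):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Paths as they appear in the log output above.
        kubectl := "/var/lib/minikube/binaries/v1.28.4/kubectl"
        manifest := "/etc/kubernetes/addons/storage-provisioner.yaml"

        cmd := exec.Command(kubectl, "apply", "-f", manifest)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Fprintln(os.Stderr, "apply failed:", err)
            os.Exit(1)
        }
    }
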
	I1213 00:09:54.302018  177307 pod_ready.go:92] pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:54.302047  177307 pod_ready.go:81] duration metric: took 6.038549186s waiting for pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.302061  177307 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8k9x6" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.308192  177307 pod_ready.go:92] pod "kube-proxy-8k9x6" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:54.308220  177307 pod_ready.go:81] duration metric: took 6.150452ms waiting for pod "kube-proxy-8k9x6" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.308233  177307 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.829614  177307 pod_ready.go:92] pod "kube-scheduler-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:54.829639  177307 pod_ready.go:81] duration metric: took 521.398817ms waiting for pod "kube-scheduler-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.829649  177307 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:56.842731  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:55.964691  177409 addons.go:502] enable addons completed in 3.737755135s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1213 00:09:58.183398  177409 node_ready.go:58] node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:58.683603  177409 node_ready.go:49] node "default-k8s-diff-port-743278" has status "Ready":"True"
	I1213 00:09:58.683629  177409 node_ready.go:38] duration metric: took 5.01572337s waiting for node "default-k8s-diff-port-743278" to be "Ready" ...
	I1213 00:09:58.683638  177409 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:58.692636  177409 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:58.699084  177409 pod_ready.go:92] pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:58.699103  177409 pod_ready.go:81] duration metric: took 6.437856ms waiting for pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:58.699111  177409 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
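
The pod_ready.go entries throughout this log poll each system pod until its PodReady condition reports True, or a timeout ends the stream of "Ready":"False" lines. A simplified version of that readiness predicate using the upstream core/v1 types; it approximates what the harness checks and assumes the k8s.io/api module is available:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady mirrors the check behind the "Ready":"True"/"False" lines:
    // a pod counts as ready once its PodReady condition has status True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        pod := &corev1.Pod{}
        pod.Status.Conditions = []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionFalse},
        }
        fmt.Println(isPodReady(pod)) // false, matching the repeated "Ready":"False" entries
    }
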
	I1213 00:09:55.877904  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:55.877977  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:55.893729  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:56.377737  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:56.377803  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:56.389754  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:56.878464  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:56.878530  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:56.891849  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:57.377841  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:57.377929  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:57.389962  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:57.878384  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:57.878464  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:57.892518  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:58.378033  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:58.378119  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:58.391780  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:58.878309  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:58.878397  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:58.890677  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:59.378117  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:59.378239  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:59.390695  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:59.878240  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:59.878318  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:59.889688  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:00.378278  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:00.378376  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:00.390756  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
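
The repeated "Checking apiserver status" blocks are restartCluster polling pgrep roughly every half second for a kube-apiserver process, until one appears or the surrounding context deadline expires (the "context deadline exceeded" entry further down). A standalone sketch of that poll loop; the interval and timeout are assumptions, not minikube's exact tuning:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerPID keeps running the same pgrep command the log shows
    // until it succeeds or ctx expires.
    func waitForAPIServerPID(ctx context.Context) (string, error) {
        ticker := time.NewTicker(500 * time.Millisecond) // assumed interval
        defer ticker.Stop()
        for {
            out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return string(out), nil
            }
            select {
            case <-ctx.Done():
                return "", fmt.Errorf("apiserver never appeared: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        pid, err := waitForAPIServerPID(ctx)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("kube-apiserver pid:", pid)
    }
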
	I1213 00:09:56.727591  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:58.729633  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:59.343431  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:01.344195  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:03.842943  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:00.718294  177409 pod_ready.go:102] pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:01.216472  177409 pod_ready.go:92] pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.216499  177409 pod_ready.go:81] duration metric: took 2.517381433s waiting for pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.216513  177409 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.221993  177409 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.222016  177409 pod_ready.go:81] duration metric: took 5.495703ms waiting for pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.222026  177409 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.227513  177409 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.227543  177409 pod_ready.go:81] duration metric: took 5.506889ms waiting for pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.227555  177409 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zk4wl" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.485096  177409 pod_ready.go:92] pod "kube-proxy-zk4wl" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.485120  177409 pod_ready.go:81] duration metric: took 257.55839ms waiting for pod "kube-proxy-zk4wl" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.485131  177409 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.886812  177409 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.886843  177409 pod_ready.go:81] duration metric: took 401.704296ms waiting for pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.886860  177409 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:04.192658  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:00.878385  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:00.878514  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:00.891279  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:01.378010  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:01.378120  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:01.389897  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:01.878496  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:01.878581  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:01.890674  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:02.377657  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:02.377767  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:02.389165  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:02.877744  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:02.877886  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:02.889536  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:03.378083  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:03.378206  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:03.390009  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:03.878637  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:03.878757  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:03.891565  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:04.347244  176813 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1213 00:10:04.347324  176813 kubeadm.go:1135] stopping kube-system containers ...
	I1213 00:10:04.347339  176813 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 00:10:04.347431  176813 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:10:04.391480  176813 cri.go:89] found id: ""
	I1213 00:10:04.391558  176813 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 00:10:04.407659  176813 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:10:04.416545  176813 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:10:04.416616  176813 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:10:04.425366  176813 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1213 00:10:04.425393  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:04.553907  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:05.643662  176813 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.089700044s)
	I1213 00:10:05.643704  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:01.230857  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:03.728598  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:05.729292  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:05.843723  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:07.844549  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:06.193695  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:08.194425  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:05.881077  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:05.983444  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:06.106543  176813 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:10:06.106637  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:06.120910  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:06.637294  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:07.137087  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:07.636989  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:07.659899  176813 api_server.go:72] duration metric: took 1.5533541s to wait for apiserver process to appear ...
	I1213 00:10:07.659925  176813 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:10:07.659949  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:08.229410  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:10.729881  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:10.344919  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:12.842700  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:10.692378  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:12.693810  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:12.660316  176813 api_server.go:269] stopped: https://192.168.39.70:8443/healthz: Get "https://192.168.39.70:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 00:10:12.660365  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:13.933418  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:10:13.933452  176813 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:10:14.434114  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:14.442223  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1213 00:10:14.442261  176813 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1213 00:10:14.934425  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:14.941188  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1213 00:10:14.941232  176813 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1213 00:10:15.433614  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:15.441583  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
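
After kubeadm brings the static pods back, the wait switches to the /healthz endpoint: the 403 above comes from probing it anonymously before the RBAC bootstrap roles exist, the 500s list post-start hooks that are still pending, and the final 200 "ok" ends the wait. A minimal polling sketch along the same lines (skipping TLS verification is a shortcut of this sketch; the real client trusts the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch-only shortcut: skip certificate verification instead of
            // loading the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.39.70:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return // healthy, matching the 200 "ok" above
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("healthz never returned 200 before the deadline")
    }
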
	I1213 00:10:15.449631  176813 api_server.go:141] control plane version: v1.16.0
	I1213 00:10:15.449656  176813 api_server.go:131] duration metric: took 7.789725712s to wait for apiserver health ...
	I1213 00:10:15.449671  176813 cni.go:84] Creating CNI manager for ""
	I1213 00:10:15.449677  176813 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:10:15.451328  176813 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:10:15.452690  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:10:15.463558  176813 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
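
The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is the generated bridge CNI chain referenced by the "Configuring bridge CNI" step. Its exact contents are not shown in the log; the snippet below writes an illustrative bridge-plus-portmap conflist in the standard CNI format, with all field values assumed rather than taken from this run:

    package main

    import (
        "fmt"
        "os"
    )

    // Illustrative bridge CNI chain; the subnet and plugin options are assumptions.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        // Written locally here; minikube scp's the equivalent file into the VM.
        if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("wrote", len(conflist), "bytes")
    }
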
	I1213 00:10:15.482997  176813 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:10:15.493646  176813 system_pods.go:59] 7 kube-system pods found
	I1213 00:10:15.493674  176813 system_pods.go:61] "coredns-5644d7b6d9-jnhmk" [38a0c948-a47e-4566-ad47-376943787ca1] Running
	I1213 00:10:15.493679  176813 system_pods.go:61] "etcd-old-k8s-version-508612" [80e685b2-cd70-4b7d-b00c-feda3ab1a509] Running
	I1213 00:10:15.493683  176813 system_pods.go:61] "kube-apiserver-old-k8s-version-508612" [657f1d7b-4fcb-44d4-96d3-3cc659fb9f0a] Running
	I1213 00:10:15.493688  176813 system_pods.go:61] "kube-controller-manager-old-k8s-version-508612" [d84a0927-7d19-4bba-8afd-b32877a9aee3] Running
	I1213 00:10:15.493692  176813 system_pods.go:61] "kube-proxy-fpd4j" [f2e9e528-576f-4339-b208-09ee5dbe7fcb] Running
	I1213 00:10:15.493696  176813 system_pods.go:61] "kube-scheduler-old-k8s-version-508612" [ce5ff03a-23bf-4cce-8795-58e412fc841c] Running
	I1213 00:10:15.493699  176813 system_pods.go:61] "storage-provisioner" [98a03a45-0cd3-40b4-9e66-6df14db5a848] Running
	I1213 00:10:15.493706  176813 system_pods.go:74] duration metric: took 10.683423ms to wait for pod list to return data ...
	I1213 00:10:15.493715  176813 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:10:15.498679  176813 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:10:15.498726  176813 node_conditions.go:123] node cpu capacity is 2
	I1213 00:10:15.498742  176813 node_conditions.go:105] duration metric: took 5.021318ms to run NodePressure ...
	I1213 00:10:15.498767  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:15.762302  176813 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1213 00:10:15.766665  176813 retry.go:31] will retry after 288.591747ms: kubelet not initialised
	I1213 00:10:13.228878  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:15.728396  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:15.343194  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:17.344384  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:15.193995  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:17.693024  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:19.693723  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:16.063637  176813 retry.go:31] will retry after 250.40677ms: kubelet not initialised
	I1213 00:10:16.320362  176813 retry.go:31] will retry after 283.670967ms: kubelet not initialised
	I1213 00:10:16.610834  176813 retry.go:31] will retry after 810.845397ms: kubelet not initialised
	I1213 00:10:17.427101  176813 retry.go:31] will retry after 1.00058932s: kubelet not initialised
	I1213 00:10:18.498625  176813 retry.go:31] will retry after 2.616819597s: kubelet not initialised
	I1213 00:10:18.226990  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:20.228211  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:19.345330  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:21.843959  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:22.192449  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:24.193001  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:21.120283  176813 retry.go:31] will retry after 1.883694522s: kubelet not initialised
	I1213 00:10:23.009312  176813 retry.go:31] will retry after 2.899361823s: kubelet not initialised
	I1213 00:10:22.727450  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:24.729952  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:24.342673  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:26.343639  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:28.842489  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:26.696279  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:29.194453  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:25.914801  176813 retry.go:31] will retry after 8.466541404s: kubelet not initialised
	I1213 00:10:27.227947  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:29.229430  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:30.843429  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:32.844457  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:31.692122  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:33.694437  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:34.391931  176813 retry.go:31] will retry after 6.686889894s: kubelet not initialised
	I1213 00:10:31.729052  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:33.730399  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:35.344029  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:37.842694  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:36.193427  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:38.193688  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:36.226978  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:38.227307  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:40.227797  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:40.343702  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:42.841574  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:40.693443  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:42.693668  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:41.084957  176813 retry.go:31] will retry after 18.68453817s: kubelet not initialised
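
The retry.go lines interleaved above show the wait for the restarted kubelet backing off with growing, jittered delays (roughly 250ms up to ~18s) until the static pods register. A standalone sketch of capped exponential backoff with jitter in that style; the constants are assumptions rather than minikube's actual tuning:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff keeps calling check until it succeeds or maxWait elapses,
    // roughly doubling a jittered delay each attempt.
    func retryWithBackoff(check func() error, maxWait time.Duration) error {
        deadline := time.Now().Add(maxWait)
        delay := 250 * time.Millisecond // assumed starting point
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("gave up after %s: %w", maxWait, err)
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %s: kubelet not initialised\n", jittered)
            time.Sleep(jittered)
            delay *= 2
            if delay > 20*time.Second {
                delay = 20 * time.Second // observed delays plateau around this order
            }
        }
    }

    func main() {
        start := time.Now()
        err := retryWithBackoff(func() error {
            if time.Since(start) < 3*time.Second {
                return errors.New("kubelet not initialised")
            }
            return nil
        }, time.Minute)
        fmt.Println("result:", err)
    }
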
	I1213 00:10:42.229433  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:44.728322  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:44.843586  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:46.844269  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:45.192582  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:47.691806  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:49.692545  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:47.227469  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:49.228908  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:49.343743  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:51.843948  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:51.694308  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:54.192629  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:51.728175  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:54.226904  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:54.342077  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:56.343115  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:58.345031  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:56.193137  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:58.693873  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:59.777116  176813 kubeadm.go:787] kubelet initialised
	I1213 00:10:59.777150  176813 kubeadm.go:788] duration metric: took 44.014819539s waiting for restarted kubelet to initialise ...
	I1213 00:10:59.777162  176813 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:10:59.782802  176813 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-jnhmk" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.788307  176813 pod_ready.go:92] pod "coredns-5644d7b6d9-jnhmk" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:59.788348  176813 pod_ready.go:81] duration metric: took 5.514049ms waiting for pod "coredns-5644d7b6d9-jnhmk" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.788356  176813 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-xsbd5" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.792569  176813 pod_ready.go:92] pod "coredns-5644d7b6d9-xsbd5" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:59.792588  176813 pod_ready.go:81] duration metric: took 4.224934ms waiting for pod "coredns-5644d7b6d9-xsbd5" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.792599  176813 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.797096  176813 pod_ready.go:92] pod "etcd-old-k8s-version-508612" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:59.797119  176813 pod_ready.go:81] duration metric: took 4.508662ms waiting for pod "etcd-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.797130  176813 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.801790  176813 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-508612" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:59.801811  176813 pod_ready.go:81] duration metric: took 4.673597ms waiting for pod "kube-apiserver-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.801818  176813 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.175474  176813 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-508612" in "kube-system" namespace has status "Ready":"True"
	I1213 00:11:00.175504  176813 pod_ready.go:81] duration metric: took 373.677737ms waiting for pod "kube-controller-manager-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.175523  176813 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fpd4j" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.576344  176813 pod_ready.go:92] pod "kube-proxy-fpd4j" in "kube-system" namespace has status "Ready":"True"
	I1213 00:11:00.576373  176813 pod_ready.go:81] duration metric: took 400.842191ms waiting for pod "kube-proxy-fpd4j" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.576387  176813 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:56.229570  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:58.728770  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:00.843201  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:03.343182  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:01.199677  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:03.201427  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:00.976886  176813 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-508612" in "kube-system" namespace has status "Ready":"True"
	I1213 00:11:00.976908  176813 pod_ready.go:81] duration metric: took 400.512629ms waiting for pod "kube-scheduler-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.976920  176813 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:03.283224  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:05.284030  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:01.229393  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:03.728570  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:05.843264  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:08.343228  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:05.694505  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:08.197100  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:07.285680  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:09.786591  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:06.227705  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:08.229577  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:10.727791  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:10.343300  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:12.843162  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:10.695161  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:13.195051  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:12.285865  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:14.785354  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:12.728656  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:15.227890  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:14.844312  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:16.847144  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:15.692597  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:18.193383  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:17.284986  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:19.786139  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:17.229608  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:19.728503  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:19.344056  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:21.843070  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:23.844051  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:20.692417  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:22.692912  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:24.693204  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:22.285292  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:24.784342  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:22.227286  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:24.228831  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:26.342758  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:28.347392  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:26.693376  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:28.696971  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:27.284643  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:29.284776  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:26.727796  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:29.227690  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:30.843482  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:32.844695  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:31.191962  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:33.192585  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:31.285494  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:33.285863  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:35.791234  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:31.727767  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:33.728047  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:35.342092  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:37.342356  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:35.196354  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:37.693679  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:38.285349  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:40.785094  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:36.228379  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:38.728361  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:40.728752  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:39.342944  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:41.343229  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:43.842669  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:40.192636  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:42.696348  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:43.284960  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:45.783972  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:42.730357  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:45.228371  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:45.844034  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:48.345622  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:45.199304  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:47.692399  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:49.692916  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:47.784062  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:49.784533  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:47.232607  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:49.727709  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:50.842207  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:52.845393  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:52.193829  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:54.694220  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:51.784671  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:54.284709  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:51.728053  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:53.729081  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:55.342783  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:57.343274  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:56.694508  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:59.194904  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:56.285342  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:58.783460  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:56.227395  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:58.231694  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:00.727822  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:59.343618  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:01.842326  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:03.842653  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:01.197290  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:03.694223  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:01.285393  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:03.784968  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:05.786110  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:02.728596  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:05.227456  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:05.843038  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:08.342838  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:05.695124  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:08.192630  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:08.284460  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:10.284768  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:07.728787  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:10.227036  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:10.344532  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:12.841921  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:10.193483  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:12.196550  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:14.693706  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:12.784036  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:14.784471  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:12.227952  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:14.228178  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:14.842965  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:17.343683  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:17.193131  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:19.692561  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:16.785596  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:19.285058  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:16.726702  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:18.728269  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:19.843031  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:22.343417  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:22.191869  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:24.193973  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:21.783890  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:23.784341  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:25.784521  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:21.227269  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:23.227691  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:25.228239  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:24.343805  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:26.346354  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:28.844254  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:26.693293  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:29.193583  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:27.784904  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:30.285014  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:27.727045  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:29.728691  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:31.346007  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:33.843421  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:31.194160  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:33.691639  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:32.784701  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:35.284958  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:32.226511  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:34.228892  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:36.342384  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:38.343546  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:35.694257  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:38.191620  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:37.286143  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:39.783802  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:36.727306  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:38.728168  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:40.850557  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:43.342393  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:40.192328  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:42.192749  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:44.693406  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:41.784411  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:43.789293  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:41.228591  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:43.728133  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:45.842401  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:47.843839  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:47.193847  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:49.692840  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:46.284387  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:48.284692  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:50.285419  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:46.228594  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:48.728575  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:50.343073  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:52.843034  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:51.692895  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:54.196344  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:52.785093  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:54.785238  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:51.226704  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:53.228359  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:55.228418  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:54.847060  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:57.345339  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:56.693854  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:59.191098  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:57.285101  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:59.783955  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:57.727063  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:59.727437  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:59.847179  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:02.343433  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:01.192388  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:03.693056  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:01.784055  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:03.784840  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:01.727635  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:03.727705  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:04.346684  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:06.843294  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:06.192928  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:08.693240  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:06.284092  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:08.784303  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:10.784971  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:06.228019  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:08.727726  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:09.342622  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:11.343211  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:13.843894  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:10.698298  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:13.191387  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:13.285854  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:15.790625  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:11.228300  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:13.730143  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:16.343574  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:18.343896  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:15.195797  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:17.694620  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:18.283712  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:20.284937  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:16.227280  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:17.419163  177122 pod_ready.go:81] duration metric: took 4m0.000090271s waiting for pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace to be "Ready" ...
	E1213 00:13:17.419207  177122 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1213 00:13:17.419233  177122 pod_ready.go:38] duration metric: took 4m12.64031929s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:13:17.419260  177122 kubeadm.go:640] restartCluster took 4m32.91279931s
	W1213 00:13:17.419346  177122 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1213 00:13:17.419387  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 00:13:20.847802  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:23.342501  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:20.193039  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:22.693730  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:22.285212  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:24.783901  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:25.343029  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:27.842840  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:25.194640  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:27.692515  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:29.695543  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:26.785503  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:29.284618  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:33.603614  177122 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (16.184189808s)
	I1213 00:13:33.603692  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:13:33.617573  177122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:13:33.626779  177122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:13:33.636160  177122 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:13:33.636214  177122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 00:13:33.694141  177122 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1213 00:13:33.694267  177122 kubeadm.go:322] [preflight] Running pre-flight checks
	I1213 00:13:33.853582  177122 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 00:13:33.853718  177122 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 00:13:33.853992  177122 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 00:13:34.092007  177122 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 00:13:29.844324  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:32.345926  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:34.093975  177122 out.go:204]   - Generating certificates and keys ...
	I1213 00:13:34.094125  177122 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1213 00:13:34.094198  177122 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1213 00:13:34.094297  177122 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 00:13:34.094492  177122 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1213 00:13:34.095287  177122 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 00:13:34.096041  177122 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1213 00:13:34.096841  177122 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1213 00:13:34.097551  177122 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1213 00:13:34.098399  177122 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 00:13:34.099122  177122 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 00:13:34.099844  177122 kubeadm.go:322] [certs] Using the existing "sa" key
	I1213 00:13:34.099929  177122 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 00:13:34.191305  177122 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 00:13:34.425778  177122 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 00:13:34.601958  177122 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 00:13:34.747536  177122 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 00:13:34.748230  177122 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 00:13:34.750840  177122 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 00:13:32.193239  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:34.691928  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:31.286291  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:33.786852  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:34.752409  177122 out.go:204]   - Booting up control plane ...
	I1213 00:13:34.752562  177122 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 00:13:34.752659  177122 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 00:13:34.752994  177122 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 00:13:34.772157  177122 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 00:13:34.774789  177122 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 00:13:34.774854  177122 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1213 00:13:34.926546  177122 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 00:13:34.346782  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:36.847723  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:36.694243  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:39.195903  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:36.284979  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:38.285685  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:40.286174  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:39.345989  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:41.353093  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:43.847024  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:43.435528  177122 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.506764 seconds
	I1213 00:13:43.435691  177122 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 00:13:43.454840  177122 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 00:13:43.997250  177122 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 00:13:43.997537  177122 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-335807 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 00:13:44.513097  177122 kubeadm.go:322] [bootstrap-token] Using token: a9yhsz.n5p4z1j5jkbj68ov
	I1213 00:13:44.514695  177122 out.go:204]   - Configuring RBAC rules ...
	I1213 00:13:44.514836  177122 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 00:13:44.520134  177122 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 00:13:44.528726  177122 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 00:13:44.535029  177122 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 00:13:44.539162  177122 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 00:13:44.545990  177122 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 00:13:44.561964  177122 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 00:13:44.831402  177122 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1213 00:13:44.927500  177122 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1213 00:13:44.931294  177122 kubeadm.go:322] 
	I1213 00:13:44.931371  177122 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1213 00:13:44.931389  177122 kubeadm.go:322] 
	I1213 00:13:44.931500  177122 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1213 00:13:44.931509  177122 kubeadm.go:322] 
	I1213 00:13:44.931535  177122 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1213 00:13:44.931605  177122 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 00:13:44.931674  177122 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 00:13:44.931681  177122 kubeadm.go:322] 
	I1213 00:13:44.931743  177122 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1213 00:13:44.931752  177122 kubeadm.go:322] 
	I1213 00:13:44.931838  177122 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 00:13:44.931861  177122 kubeadm.go:322] 
	I1213 00:13:44.931938  177122 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1213 00:13:44.932026  177122 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 00:13:44.932139  177122 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 00:13:44.932151  177122 kubeadm.go:322] 
	I1213 00:13:44.932260  177122 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 00:13:44.932367  177122 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1213 00:13:44.932386  177122 kubeadm.go:322] 
	I1213 00:13:44.932533  177122 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token a9yhsz.n5p4z1j5jkbj68ov \
	I1213 00:13:44.932702  177122 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d \
	I1213 00:13:44.932726  177122 kubeadm.go:322] 	--control-plane 
	I1213 00:13:44.932730  177122 kubeadm.go:322] 
	I1213 00:13:44.932797  177122 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1213 00:13:44.932808  177122 kubeadm.go:322] 
	I1213 00:13:44.932927  177122 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token a9yhsz.n5p4z1j5jkbj68ov \
	I1213 00:13:44.933074  177122 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1213 00:13:44.933953  177122 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 00:13:44.934004  177122 cni.go:84] Creating CNI manager for ""
	I1213 00:13:44.934026  177122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:13:44.935893  177122 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:13:41.694337  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:44.192303  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:42.783865  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:44.784599  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:44.937355  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:13:44.961248  177122 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:13:45.005684  177122 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:13:45.005758  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:45.005789  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=embed-certs-335807 minikube.k8s.io/updated_at=2023_12_13T00_13_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:45.117205  177122 ops.go:34] apiserver oom_adj: -16
	I1213 00:13:45.402961  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:45.532503  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:46.343927  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:48.843509  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:46.197988  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:48.691611  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:46.785080  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:49.283316  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:46.138647  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:46.639104  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:47.139139  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:47.638244  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:48.138634  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:48.638352  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:49.138616  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:49.639061  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:50.138633  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:50.639013  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:51.343525  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:53.345044  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:50.693254  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:52.693448  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:51.286352  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:53.782966  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:55.786792  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:51.138430  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:51.638340  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:52.138696  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:52.638727  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:53.138509  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:53.639092  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:54.138153  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:54.638781  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:55.138875  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:55.639166  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:56.138534  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:56.638726  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:57.138427  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:57.273101  177122 kubeadm.go:1088] duration metric: took 12.26741009s to wait for elevateKubeSystemPrivileges.
	I1213 00:13:57.273139  177122 kubeadm.go:406] StartCluster complete in 5m12.825293837s
	I1213 00:13:57.273163  177122 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:13:57.273294  177122 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:13:57.275845  177122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:13:57.276142  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:13:57.276488  177122 config.go:182] Loaded profile config "embed-certs-335807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:13:57.276665  177122 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:13:57.276739  177122 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-335807"
	I1213 00:13:57.276756  177122 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-335807"
	W1213 00:13:57.276765  177122 addons.go:240] addon storage-provisioner should already be in state true
	I1213 00:13:57.276812  177122 host.go:66] Checking if "embed-certs-335807" exists ...
	I1213 00:13:57.277245  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.277283  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.277356  177122 addons.go:69] Setting default-storageclass=true in profile "embed-certs-335807"
	I1213 00:13:57.277374  177122 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-335807"
	I1213 00:13:57.277528  177122 addons.go:69] Setting metrics-server=true in profile "embed-certs-335807"
	I1213 00:13:57.277545  177122 addons.go:231] Setting addon metrics-server=true in "embed-certs-335807"
	W1213 00:13:57.277552  177122 addons.go:240] addon metrics-server should already be in state true
	I1213 00:13:57.277599  177122 host.go:66] Checking if "embed-certs-335807" exists ...
	I1213 00:13:57.277791  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.277820  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.277923  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.277945  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.296571  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I1213 00:13:57.299879  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I1213 00:13:57.299897  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39203
	I1213 00:13:57.300251  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.300833  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.300906  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.300923  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.300935  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.301294  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.301309  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.301330  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.301419  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.301427  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.301497  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:13:57.301728  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.301774  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.302199  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.302232  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.303181  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.303222  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.304586  177122 addons.go:231] Setting addon default-storageclass=true in "embed-certs-335807"
	W1213 00:13:57.304601  177122 addons.go:240] addon default-storageclass should already be in state true
	I1213 00:13:57.304620  177122 host.go:66] Checking if "embed-certs-335807" exists ...
	I1213 00:13:57.304860  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.304891  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.323403  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I1213 00:13:57.324103  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.324810  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40115
	I1213 00:13:57.324961  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.324985  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.325197  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.325332  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.325518  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:13:57.325910  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.325935  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.326524  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.326731  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:13:57.328013  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:13:57.329895  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:13:57.332188  177122 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 00:13:57.333332  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I1213 00:13:57.333375  177122 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 00:13:57.334952  177122 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:13:57.333392  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 00:13:57.333795  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.337096  177122 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:13:57.337110  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:13:57.337124  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:13:57.337162  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:13:57.337564  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.337585  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.339793  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.340514  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.340572  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.340821  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.341606  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:13:57.341657  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.341829  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:13:57.342023  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:13:57.342206  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:13:57.342411  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:13:57.347105  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.347512  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:13:57.347538  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.347782  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:13:57.347974  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:13:57.348108  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:13:57.348213  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:13:57.359690  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I1213 00:13:57.360385  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.361065  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.361093  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.361567  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.361777  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:13:57.363693  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:13:57.364020  177122 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:13:57.364037  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:13:57.364056  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:13:57.367409  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.367874  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:13:57.367904  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.368086  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:13:57.368287  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:13:57.368470  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:13:57.368619  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:13:57.399353  177122 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-335807" context rescaled to 1 replicas
	I1213 00:13:57.399391  177122 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.249 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:13:57.401371  177122 out.go:177] * Verifying Kubernetes components...
	I1213 00:13:54.829811  177307 pod_ready.go:81] duration metric: took 4m0.000140793s waiting for pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace to be "Ready" ...
	E1213 00:13:54.829844  177307 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1213 00:13:54.829878  177307 pod_ready.go:38] duration metric: took 4m13.138964255s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:13:54.829912  177307 kubeadm.go:640] restartCluster took 4m33.090839538s
	W1213 00:13:54.829977  177307 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1213 00:13:54.830014  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 00:13:55.192745  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:57.193249  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:59.196279  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:57.403699  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:13:57.551632  177122 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 00:13:57.551656  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 00:13:57.590132  177122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:13:57.617477  177122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:13:57.648290  177122 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 00:13:57.648324  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 00:13:57.724394  177122 node_ready.go:35] waiting up to 6m0s for node "embed-certs-335807" to be "Ready" ...
	I1213 00:13:57.724498  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 00:13:57.751666  177122 node_ready.go:49] node "embed-certs-335807" has status "Ready":"True"
	I1213 00:13:57.751704  177122 node_ready.go:38] duration metric: took 27.274531ms waiting for node "embed-certs-335807" to be "Ready" ...
	I1213 00:13:57.751718  177122 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:13:57.764283  177122 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gs4kb" in "kube-system" namespace to be "Ready" ...
	I1213 00:13:57.835941  177122 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:13:57.835968  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 00:13:58.040994  177122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:13:59.867561  177122 pod_ready.go:102] pod "coredns-5dd5756b68-gs4kb" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:00.210713  177122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.620538044s)
	I1213 00:14:00.210745  177122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.593229432s)
	I1213 00:14:00.210763  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.210775  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.210805  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.210846  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.210892  177122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.169863052s)
	I1213 00:14:00.210932  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.210951  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.210803  177122 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.48627637s)
	I1213 00:14:00.211241  177122 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
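	[Editor's note, illustrative only and not part of the captured log: the replace command above rewrites the coredns ConfigMap so the Corefile gains a hosts block ahead of the forward stanza, mapping host.minikube.internal to the host-side bridge address (192.168.61.1 in this run). A minimal way to inspect the injected record afterwards, assuming the same kubeconfig and kubectl binary used by the test:
	    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	    # the output should contain the block injected by the sed expression above:
	    #     hosts {
	    #        192.168.61.1 host.minikube.internal
	    #        fallthrough
	    #     }
	]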
	I1213 00:14:00.211428  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.211467  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.211477  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.211486  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.211496  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.211804  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.211843  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.211851  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.211860  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.211869  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.211979  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.212025  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.212033  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.212251  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.213205  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.213214  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.213221  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.213253  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.213269  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.213287  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.213300  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.213565  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.213592  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.213600  177122 addons.go:467] Verifying addon metrics-server=true in "embed-certs-335807"
	I1213 00:14:00.213633  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.231892  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.231921  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.232238  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.232257  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.234089  177122 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1213 00:13:58.285584  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:00.286469  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:00.235676  177122 addons.go:502] enable addons completed in 2.959016059s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1213 00:14:01.848071  177122 pod_ready.go:92] pod "coredns-5dd5756b68-gs4kb" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.848093  177122 pod_ready.go:81] duration metric: took 4.083780035s waiting for pod "coredns-5dd5756b68-gs4kb" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.848101  177122 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t92hd" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.854062  177122 pod_ready.go:92] pod "coredns-5dd5756b68-t92hd" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.854082  177122 pod_ready.go:81] duration metric: took 5.975194ms waiting for pod "coredns-5dd5756b68-t92hd" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.854090  177122 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.864033  177122 pod_ready.go:92] pod "etcd-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.864060  177122 pod_ready.go:81] duration metric: took 9.963384ms waiting for pod "etcd-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.864072  177122 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.875960  177122 pod_ready.go:92] pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.875990  177122 pod_ready.go:81] duration metric: took 11.909604ms waiting for pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.876004  177122 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.882084  177122 pod_ready.go:92] pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.882107  177122 pod_ready.go:81] duration metric: took 6.092978ms waiting for pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.882118  177122 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ccq47" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:02.645363  177122 pod_ready.go:92] pod "kube-proxy-ccq47" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:02.645389  177122 pod_ready.go:81] duration metric: took 763.264171ms waiting for pod "kube-proxy-ccq47" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:02.645399  177122 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:03.045476  177122 pod_ready.go:92] pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:03.045502  177122 pod_ready.go:81] duration metric: took 400.097321ms waiting for pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:03.045513  177122 pod_ready.go:38] duration metric: took 5.293782674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:14:03.045530  177122 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:14:03.045584  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:14:03.062802  177122 api_server.go:72] duration metric: took 5.663381439s to wait for apiserver process to appear ...
	I1213 00:14:03.062827  177122 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:14:03.062848  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:14:03.068482  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 200:
	ok
	I1213 00:14:03.069909  177122 api_server.go:141] control plane version: v1.28.4
	I1213 00:14:03.069934  177122 api_server.go:131] duration metric: took 7.099309ms to wait for apiserver health ...
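	[Editor's note, illustrative only: the healthz probe logged above can be reproduced by hand against the same endpoint; -k skips certificate verification since the cluster CA is not in the local trust store:
	    curl -k https://192.168.61.249:8443/healthz
	    # expected response body: ok
	]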
	I1213 00:14:03.069943  177122 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:14:03.248993  177122 system_pods.go:59] 9 kube-system pods found
	I1213 00:14:03.249025  177122 system_pods.go:61] "coredns-5dd5756b68-gs4kb" [d4b86e83-a0a1-4bf8-958e-e154e91f47ef] Running
	I1213 00:14:03.249032  177122 system_pods.go:61] "coredns-5dd5756b68-t92hd" [1ad2dcb3-bcda-42af-b4ce-0d95bba0315f] Running
	I1213 00:14:03.249039  177122 system_pods.go:61] "etcd-embed-certs-335807" [aa5222a7-5670-4550-9d65-6db2095898be] Running
	I1213 00:14:03.249045  177122 system_pods.go:61] "kube-apiserver-embed-certs-335807" [ca0e9de9-8f6a-4bae-b1d1-04b7c0c3cd4c] Running
	I1213 00:14:03.249052  177122 system_pods.go:61] "kube-controller-manager-embed-certs-335807" [f5563afe-3d6c-4b44-b0c0-765da451fd88] Running
	I1213 00:14:03.249057  177122 system_pods.go:61] "kube-proxy-ccq47" [68f3c55f-175e-40af-a769-65c859d5012d] Running
	I1213 00:14:03.249063  177122 system_pods.go:61] "kube-scheduler-embed-certs-335807" [c989cf08-80e9-4b0f-b0e4-f840c6259ace] Running
	I1213 00:14:03.249074  177122 system_pods.go:61] "metrics-server-57f55c9bc5-z7qb4" [b33959c3-63b7-4a81-adda-6d2971036e89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:03.249082  177122 system_pods.go:61] "storage-provisioner" [816660d7-a041-4695-b7da-d977b8891935] Running
	I1213 00:14:03.249095  177122 system_pods.go:74] duration metric: took 179.144496ms to wait for pod list to return data ...
	I1213 00:14:03.249106  177122 default_sa.go:34] waiting for default service account to be created ...
	I1213 00:14:03.444557  177122 default_sa.go:45] found service account: "default"
	I1213 00:14:03.444591  177122 default_sa.go:55] duration metric: took 195.469108ms for default service account to be created ...
	I1213 00:14:03.444603  177122 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 00:14:03.651685  177122 system_pods.go:86] 9 kube-system pods found
	I1213 00:14:03.651714  177122 system_pods.go:89] "coredns-5dd5756b68-gs4kb" [d4b86e83-a0a1-4bf8-958e-e154e91f47ef] Running
	I1213 00:14:03.651719  177122 system_pods.go:89] "coredns-5dd5756b68-t92hd" [1ad2dcb3-bcda-42af-b4ce-0d95bba0315f] Running
	I1213 00:14:03.651723  177122 system_pods.go:89] "etcd-embed-certs-335807" [aa5222a7-5670-4550-9d65-6db2095898be] Running
	I1213 00:14:03.651727  177122 system_pods.go:89] "kube-apiserver-embed-certs-335807" [ca0e9de9-8f6a-4bae-b1d1-04b7c0c3cd4c] Running
	I1213 00:14:03.651731  177122 system_pods.go:89] "kube-controller-manager-embed-certs-335807" [f5563afe-3d6c-4b44-b0c0-765da451fd88] Running
	I1213 00:14:03.651735  177122 system_pods.go:89] "kube-proxy-ccq47" [68f3c55f-175e-40af-a769-65c859d5012d] Running
	I1213 00:14:03.651739  177122 system_pods.go:89] "kube-scheduler-embed-certs-335807" [c989cf08-80e9-4b0f-b0e4-f840c6259ace] Running
	I1213 00:14:03.651745  177122 system_pods.go:89] "metrics-server-57f55c9bc5-z7qb4" [b33959c3-63b7-4a81-adda-6d2971036e89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:03.651750  177122 system_pods.go:89] "storage-provisioner" [816660d7-a041-4695-b7da-d977b8891935] Running
	I1213 00:14:03.651758  177122 system_pods.go:126] duration metric: took 207.148805ms to wait for k8s-apps to be running ...
	I1213 00:14:03.651764  177122 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 00:14:03.651814  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:03.666068  177122 system_svc.go:56] duration metric: took 14.292973ms WaitForService to wait for kubelet.
	I1213 00:14:03.666093  177122 kubeadm.go:581] duration metric: took 6.266680553s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1213 00:14:03.666109  177122 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:14:03.845399  177122 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:14:03.845431  177122 node_conditions.go:123] node cpu capacity is 2
	I1213 00:14:03.845447  177122 node_conditions.go:105] duration metric: took 179.332019ms to run NodePressure ...
	I1213 00:14:03.845462  177122 start.go:228] waiting for startup goroutines ...
	I1213 00:14:03.845470  177122 start.go:233] waiting for cluster config update ...
	I1213 00:14:03.845482  177122 start.go:242] writing updated cluster config ...
	I1213 00:14:03.845850  177122 ssh_runner.go:195] Run: rm -f paused
	I1213 00:14:03.898374  177122 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1213 00:14:03.900465  177122 out.go:177] * Done! kubectl is now configured to use "embed-certs-335807" cluster and "default" namespace by default
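	[Editor's note, illustrative only: at this point only metrics-server is not Ready; the pod lists above show metrics-server-57f55c9bc5-z7qb4 Pending with ContainersNotReady, presumably because the addon was pointed at the placeholder image fake.domain/registry.k8s.io/echoserver:1.4 earlier in this log, which cannot be pulled. One way to confirm, using the context just configured:
	    kubectl --context embed-certs-335807 -n kube-system describe pod metrics-server-57f55c9bc5-z7qb4
	    # the Events section should show the image pull failing for fake.domain/...
	]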
	I1213 00:14:01.693061  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:01.886947  177409 pod_ready.go:81] duration metric: took 4m0.000066225s waiting for pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace to be "Ready" ...
	E1213 00:14:01.886997  177409 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1213 00:14:01.887010  177409 pod_ready.go:38] duration metric: took 4m3.203360525s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:14:01.887056  177409 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:14:01.887093  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 00:14:01.887156  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 00:14:01.956004  177409 cri.go:89] found id: "c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:01.956029  177409 cri.go:89] found id: ""
	I1213 00:14:01.956038  177409 logs.go:284] 1 containers: [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1]
	I1213 00:14:01.956096  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:01.961314  177409 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 00:14:01.961388  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 00:14:02.001797  177409 cri.go:89] found id: "fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:02.001825  177409 cri.go:89] found id: ""
	I1213 00:14:02.001835  177409 logs.go:284] 1 containers: [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7]
	I1213 00:14:02.001881  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.007127  177409 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 00:14:02.007193  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 00:14:02.050259  177409 cri.go:89] found id: "125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:02.050283  177409 cri.go:89] found id: ""
	I1213 00:14:02.050294  177409 logs.go:284] 1 containers: [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad]
	I1213 00:14:02.050347  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.056086  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 00:14:02.056147  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 00:14:02.125159  177409 cri.go:89] found id: "c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:02.125189  177409 cri.go:89] found id: ""
	I1213 00:14:02.125199  177409 logs.go:284] 1 containers: [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee]
	I1213 00:14:02.125261  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.129874  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 00:14:02.129939  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 00:14:02.175027  177409 cri.go:89] found id: "545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:02.175058  177409 cri.go:89] found id: ""
	I1213 00:14:02.175067  177409 logs.go:284] 1 containers: [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41]
	I1213 00:14:02.175127  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.180444  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 00:14:02.180515  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 00:14:02.219578  177409 cri.go:89] found id: "57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:02.219603  177409 cri.go:89] found id: ""
	I1213 00:14:02.219610  177409 logs.go:284] 1 containers: [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673]
	I1213 00:14:02.219664  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.223644  177409 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 00:14:02.223693  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 00:14:02.260542  177409 cri.go:89] found id: ""
	I1213 00:14:02.260567  177409 logs.go:284] 0 containers: []
	W1213 00:14:02.260575  177409 logs.go:286] No container was found matching "kindnet"
	I1213 00:14:02.260583  177409 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 00:14:02.260656  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 00:14:02.304058  177409 cri.go:89] found id: "c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:02.304082  177409 cri.go:89] found id: "705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:02.304090  177409 cri.go:89] found id: ""
	I1213 00:14:02.304100  177409 logs.go:284] 2 containers: [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a]
	I1213 00:14:02.304159  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.308606  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.312421  177409 logs.go:123] Gathering logs for kube-scheduler [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee] ...
	I1213 00:14:02.312473  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:02.356415  177409 logs.go:123] Gathering logs for kube-proxy [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41] ...
	I1213 00:14:02.356460  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:02.405870  177409 logs.go:123] Gathering logs for CRI-O ...
	I1213 00:14:02.405902  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 00:14:02.876461  177409 logs.go:123] Gathering logs for describe nodes ...
	I1213 00:14:02.876508  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 00:14:03.037302  177409 logs.go:123] Gathering logs for kube-apiserver [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1] ...
	I1213 00:14:03.037334  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:03.098244  177409 logs.go:123] Gathering logs for kube-controller-manager [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673] ...
	I1213 00:14:03.098273  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:03.163681  177409 logs.go:123] Gathering logs for container status ...
	I1213 00:14:03.163712  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 00:14:03.216883  177409 logs.go:123] Gathering logs for kubelet ...
	I1213 00:14:03.216912  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 00:14:03.267979  177409 logs.go:123] Gathering logs for coredns [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad] ...
	I1213 00:14:03.268011  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:03.309364  177409 logs.go:123] Gathering logs for storage-provisioner [705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a] ...
	I1213 00:14:03.309394  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:03.352427  177409 logs.go:123] Gathering logs for etcd [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7] ...
	I1213 00:14:03.352479  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:03.406508  177409 logs.go:123] Gathering logs for storage-provisioner [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974] ...
	I1213 00:14:03.406547  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:03.449959  177409 logs.go:123] Gathering logs for dmesg ...
	I1213 00:14:03.449985  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
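	[Editor's note, illustrative only: the log-gathering pass above repeats one pattern per component — list matching containers with crictl, then tail each one — plus journald for kubelet and CRI-O. Condensed into a sketch using only commands already shown in this run:
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
	      for id in $(sudo crictl ps -a --quiet --name="$name"); do
	        sudo /usr/bin/crictl logs --tail 400 "$id"
	      done
	    done
	    # kubelet and CRI-O logs come from journald instead:
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	]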
	I1213 00:14:02.784516  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:05.284536  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:09.408895  177307 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.578851358s)
	I1213 00:14:09.408954  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:09.422044  177307 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:14:09.430579  177307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:14:09.438689  177307 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:14:09.438727  177307 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 00:14:09.493519  177307 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I1213 00:14:09.493657  177307 kubeadm.go:322] [preflight] Running pre-flight checks
	I1213 00:14:09.648151  177307 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 00:14:09.648294  177307 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 00:14:09.648489  177307 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 00:14:09.908199  177307 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 00:14:05.974125  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:14:05.992335  177409 api_server.go:72] duration metric: took 4m12.842684139s to wait for apiserver process to appear ...
	I1213 00:14:05.992364  177409 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:14:05.992411  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 00:14:05.992491  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 00:14:06.037770  177409 cri.go:89] found id: "c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:06.037796  177409 cri.go:89] found id: ""
	I1213 00:14:06.037805  177409 logs.go:284] 1 containers: [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1]
	I1213 00:14:06.037863  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.042949  177409 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 00:14:06.043016  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 00:14:06.090863  177409 cri.go:89] found id: "fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:06.090888  177409 cri.go:89] found id: ""
	I1213 00:14:06.090897  177409 logs.go:284] 1 containers: [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7]
	I1213 00:14:06.090951  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.103859  177409 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 00:14:06.103925  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 00:14:06.156957  177409 cri.go:89] found id: "125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:06.156982  177409 cri.go:89] found id: ""
	I1213 00:14:06.156992  177409 logs.go:284] 1 containers: [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad]
	I1213 00:14:06.157053  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.162170  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 00:14:06.162220  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 00:14:06.204839  177409 cri.go:89] found id: "c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:06.204867  177409 cri.go:89] found id: ""
	I1213 00:14:06.204877  177409 logs.go:284] 1 containers: [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee]
	I1213 00:14:06.204942  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.210221  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 00:14:06.210287  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 00:14:06.255881  177409 cri.go:89] found id: "545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:06.255909  177409 cri.go:89] found id: ""
	I1213 00:14:06.255918  177409 logs.go:284] 1 containers: [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41]
	I1213 00:14:06.255984  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.260853  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 00:14:06.260924  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 00:14:06.308377  177409 cri.go:89] found id: "57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:06.308400  177409 cri.go:89] found id: ""
	I1213 00:14:06.308413  177409 logs.go:284] 1 containers: [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673]
	I1213 00:14:06.308493  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.315028  177409 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 00:14:06.315111  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 00:14:06.365453  177409 cri.go:89] found id: ""
	I1213 00:14:06.365484  177409 logs.go:284] 0 containers: []
	W1213 00:14:06.365494  177409 logs.go:286] No container was found matching "kindnet"
	I1213 00:14:06.365507  177409 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 00:14:06.365568  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 00:14:06.423520  177409 cri.go:89] found id: "c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:06.423545  177409 cri.go:89] found id: "705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:06.423560  177409 cri.go:89] found id: ""
	I1213 00:14:06.423571  177409 logs.go:284] 2 containers: [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a]
	I1213 00:14:06.423628  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.429613  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.434283  177409 logs.go:123] Gathering logs for describe nodes ...
	I1213 00:14:06.434310  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 00:14:06.571329  177409 logs.go:123] Gathering logs for kube-proxy [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41] ...
	I1213 00:14:06.571375  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:06.613274  177409 logs.go:123] Gathering logs for kube-controller-manager [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673] ...
	I1213 00:14:06.613307  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:06.673407  177409 logs.go:123] Gathering logs for dmesg ...
	I1213 00:14:06.673455  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 00:14:06.688886  177409 logs.go:123] Gathering logs for coredns [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad] ...
	I1213 00:14:06.688933  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:06.733130  177409 logs.go:123] Gathering logs for storage-provisioner [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974] ...
	I1213 00:14:06.733162  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:06.780131  177409 logs.go:123] Gathering logs for storage-provisioner [705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a] ...
	I1213 00:14:06.780161  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:06.827465  177409 logs.go:123] Gathering logs for etcd [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7] ...
	I1213 00:14:06.827500  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:06.880245  177409 logs.go:123] Gathering logs for kube-scheduler [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee] ...
	I1213 00:14:06.880286  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:06.919735  177409 logs.go:123] Gathering logs for kubelet ...
	I1213 00:14:06.919764  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 00:14:06.974039  177409 logs.go:123] Gathering logs for CRI-O ...
	I1213 00:14:06.974074  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 00:14:07.400452  177409 logs.go:123] Gathering logs for container status ...
	I1213 00:14:07.400491  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 00:14:07.456759  177409 logs.go:123] Gathering logs for kube-apiserver [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1] ...
	I1213 00:14:07.456789  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:10.010686  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:14:10.017803  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I1213 00:14:10.019196  177409 api_server.go:141] control plane version: v1.28.4
	I1213 00:14:10.019216  177409 api_server.go:131] duration metric: took 4.026844615s to wait for apiserver health ...
	I1213 00:14:10.019225  177409 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:14:10.019251  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 00:14:10.019303  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 00:14:07.784301  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:09.785226  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:09.910151  177307 out.go:204]   - Generating certificates and keys ...
	I1213 00:14:09.910259  177307 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1213 00:14:09.910339  177307 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1213 00:14:09.910444  177307 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 00:14:09.910527  177307 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1213 00:14:09.910616  177307 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 00:14:09.910662  177307 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1213 00:14:09.910713  177307 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1213 00:14:09.910791  177307 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1213 00:14:09.910892  177307 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 00:14:09.911041  177307 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 00:14:09.911107  177307 kubeadm.go:322] [certs] Using the existing "sa" key
	I1213 00:14:09.911186  177307 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 00:14:10.262533  177307 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 00:14:10.508123  177307 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 00:14:10.766822  177307 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 00:14:10.866565  177307 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 00:14:11.206659  177307 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 00:14:11.207238  177307 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 00:14:11.210018  177307 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 00:14:10.061672  177409 cri.go:89] found id: "c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:10.061699  177409 cri.go:89] found id: ""
	I1213 00:14:10.061708  177409 logs.go:284] 1 containers: [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1]
	I1213 00:14:10.061769  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.066426  177409 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 00:14:10.066491  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 00:14:10.107949  177409 cri.go:89] found id: "fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:10.107978  177409 cri.go:89] found id: ""
	I1213 00:14:10.107994  177409 logs.go:284] 1 containers: [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7]
	I1213 00:14:10.108053  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.112321  177409 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 00:14:10.112393  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 00:14:10.169082  177409 cri.go:89] found id: "125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:10.169110  177409 cri.go:89] found id: ""
	I1213 00:14:10.169120  177409 logs.go:284] 1 containers: [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad]
	I1213 00:14:10.169175  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.174172  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 00:14:10.174225  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 00:14:10.220290  177409 cri.go:89] found id: "c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:10.220313  177409 cri.go:89] found id: ""
	I1213 00:14:10.220326  177409 logs.go:284] 1 containers: [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee]
	I1213 00:14:10.220384  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.225241  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 00:14:10.225310  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 00:14:10.271312  177409 cri.go:89] found id: "545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:10.271336  177409 cri.go:89] found id: ""
	I1213 00:14:10.271345  177409 logs.go:284] 1 containers: [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41]
	I1213 00:14:10.271401  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.275974  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 00:14:10.276049  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 00:14:10.324262  177409 cri.go:89] found id: "57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:10.324288  177409 cri.go:89] found id: ""
	I1213 00:14:10.324299  177409 logs.go:284] 1 containers: [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673]
	I1213 00:14:10.324360  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.329065  177409 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 00:14:10.329130  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 00:14:10.375611  177409 cri.go:89] found id: ""
	I1213 00:14:10.375640  177409 logs.go:284] 0 containers: []
	W1213 00:14:10.375648  177409 logs.go:286] No container was found matching "kindnet"
	I1213 00:14:10.375654  177409 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 00:14:10.375725  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 00:14:10.420778  177409 cri.go:89] found id: "c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:10.420807  177409 cri.go:89] found id: "705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:10.420812  177409 cri.go:89] found id: ""
	I1213 00:14:10.420819  177409 logs.go:284] 2 containers: [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a]
	I1213 00:14:10.420866  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.425676  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.430150  177409 logs.go:123] Gathering logs for kubelet ...
	I1213 00:14:10.430180  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 00:14:10.486314  177409 logs.go:123] Gathering logs for dmesg ...
	I1213 00:14:10.486351  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 00:14:10.500915  177409 logs.go:123] Gathering logs for kube-proxy [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41] ...
	I1213 00:14:10.500946  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:10.543073  177409 logs.go:123] Gathering logs for storage-provisioner [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974] ...
	I1213 00:14:10.543108  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:10.584779  177409 logs.go:123] Gathering logs for storage-provisioner [705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a] ...
	I1213 00:14:10.584814  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:10.629824  177409 logs.go:123] Gathering logs for describe nodes ...
	I1213 00:14:10.629852  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 00:14:10.756816  177409 logs.go:123] Gathering logs for kube-apiserver [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1] ...
	I1213 00:14:10.756857  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:10.807506  177409 logs.go:123] Gathering logs for coredns [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad] ...
	I1213 00:14:10.807536  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:10.849398  177409 logs.go:123] Gathering logs for kube-controller-manager [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673] ...
	I1213 00:14:10.849436  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:10.911470  177409 logs.go:123] Gathering logs for CRI-O ...
	I1213 00:14:10.911508  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 00:14:11.288892  177409 logs.go:123] Gathering logs for etcd [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7] ...
	I1213 00:14:11.288941  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:11.361299  177409 logs.go:123] Gathering logs for kube-scheduler [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee] ...
	I1213 00:14:11.361347  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:11.407800  177409 logs.go:123] Gathering logs for container status ...
	I1213 00:14:11.407850  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 00:14:13.965440  177409 system_pods.go:59] 8 kube-system pods found
	I1213 00:14:13.965477  177409 system_pods.go:61] "coredns-5dd5756b68-ftv9l" [60d9730b-2e6b-4263-a70d-273cf6837f60] Running
	I1213 00:14:13.965485  177409 system_pods.go:61] "etcd-default-k8s-diff-port-743278" [4c78aadc-8213-4cd3-a365-e8d71d00b1ed] Running
	I1213 00:14:13.965493  177409 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-743278" [2590235b-fc24-41ed-8879-909cbba26d5c] Running
	I1213 00:14:13.965500  177409 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-743278" [58396a31-0a0d-4a3f-b658-e27f836affd1] Running
	I1213 00:14:13.965505  177409 system_pods.go:61] "kube-proxy-zk4wl" [e20fe8f7-0c1f-4be3-8184-cd3d6cc19a43] Running
	I1213 00:14:13.965509  177409 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-743278" [e4312815-a719-4ab5-b628-9958dd7ce658] Running
	I1213 00:14:13.965518  177409 system_pods.go:61] "metrics-server-57f55c9bc5-6q9jg" [b1849258-4fd1-43a5-b67b-02d8e44acd8b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:13.965528  177409 system_pods.go:61] "storage-provisioner" [d87ee16e-300f-4797-b0be-efc256d0e827] Running
	I1213 00:14:13.965538  177409 system_pods.go:74] duration metric: took 3.946305195s to wait for pod list to return data ...
	I1213 00:14:13.965548  177409 default_sa.go:34] waiting for default service account to be created ...
	I1213 00:14:13.969074  177409 default_sa.go:45] found service account: "default"
	I1213 00:14:13.969103  177409 default_sa.go:55] duration metric: took 3.543208ms for default service account to be created ...
	I1213 00:14:13.969114  177409 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 00:14:13.977167  177409 system_pods.go:86] 8 kube-system pods found
	I1213 00:14:13.977201  177409 system_pods.go:89] "coredns-5dd5756b68-ftv9l" [60d9730b-2e6b-4263-a70d-273cf6837f60] Running
	I1213 00:14:13.977211  177409 system_pods.go:89] "etcd-default-k8s-diff-port-743278" [4c78aadc-8213-4cd3-a365-e8d71d00b1ed] Running
	I1213 00:14:13.977219  177409 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-743278" [2590235b-fc24-41ed-8879-909cbba26d5c] Running
	I1213 00:14:13.977226  177409 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-743278" [58396a31-0a0d-4a3f-b658-e27f836affd1] Running
	I1213 00:14:13.977232  177409 system_pods.go:89] "kube-proxy-zk4wl" [e20fe8f7-0c1f-4be3-8184-cd3d6cc19a43] Running
	I1213 00:14:13.977238  177409 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-743278" [e4312815-a719-4ab5-b628-9958dd7ce658] Running
	I1213 00:14:13.977249  177409 system_pods.go:89] "metrics-server-57f55c9bc5-6q9jg" [b1849258-4fd1-43a5-b67b-02d8e44acd8b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:13.977257  177409 system_pods.go:89] "storage-provisioner" [d87ee16e-300f-4797-b0be-efc256d0e827] Running
	I1213 00:14:13.977272  177409 system_pods.go:126] duration metric: took 8.1502ms to wait for k8s-apps to be running ...
	I1213 00:14:13.977288  177409 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 00:14:13.977342  177409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:13.996304  177409 system_svc.go:56] duration metric: took 19.006856ms WaitForService to wait for kubelet.
	I1213 00:14:13.996340  177409 kubeadm.go:581] duration metric: took 4m20.846697962s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1213 00:14:13.996374  177409 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:14:14.000473  177409 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:14:14.000505  177409 node_conditions.go:123] node cpu capacity is 2
	I1213 00:14:14.000518  177409 node_conditions.go:105] duration metric: took 4.137212ms to run NodePressure ...
	I1213 00:14:14.000534  177409 start.go:228] waiting for startup goroutines ...
	I1213 00:14:14.000544  177409 start.go:233] waiting for cluster config update ...
	I1213 00:14:14.000561  177409 start.go:242] writing updated cluster config ...
	I1213 00:14:14.000901  177409 ssh_runner.go:195] Run: rm -f paused
	I1213 00:14:14.059785  177409 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1213 00:14:14.062155  177409 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-743278" cluster and "default" namespace by default
	I1213 00:14:11.212405  177307 out.go:204]   - Booting up control plane ...
	I1213 00:14:11.212538  177307 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 00:14:11.213865  177307 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 00:14:11.215312  177307 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 00:14:11.235356  177307 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 00:14:11.236645  177307 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 00:14:11.236755  177307 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1213 00:14:11.385788  177307 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 00:14:12.284994  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:14.784159  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:19.387966  177307 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002219 seconds
	I1213 00:14:19.402873  177307 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 00:14:19.424220  177307 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 00:14:19.954243  177307 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 00:14:19.954453  177307 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-143586 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 00:14:20.468986  177307 kubeadm.go:322] [bootstrap-token] Using token: nss44e.j85t1ilri9kvvn0e
	I1213 00:14:16.785364  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:19.284214  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:20.470732  177307 out.go:204]   - Configuring RBAC rules ...
	I1213 00:14:20.470866  177307 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 00:14:20.479490  177307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 00:14:20.488098  177307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 00:14:20.491874  177307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 00:14:20.496891  177307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 00:14:20.506058  177307 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 00:14:20.523032  177307 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 00:14:20.796465  177307 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1213 00:14:20.892018  177307 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1213 00:14:20.892049  177307 kubeadm.go:322] 
	I1213 00:14:20.892159  177307 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1213 00:14:20.892185  177307 kubeadm.go:322] 
	I1213 00:14:20.892284  177307 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1213 00:14:20.892296  177307 kubeadm.go:322] 
	I1213 00:14:20.892338  177307 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1213 00:14:20.892421  177307 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 00:14:20.892512  177307 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 00:14:20.892529  177307 kubeadm.go:322] 
	I1213 00:14:20.892620  177307 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1213 00:14:20.892648  177307 kubeadm.go:322] 
	I1213 00:14:20.892734  177307 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 00:14:20.892745  177307 kubeadm.go:322] 
	I1213 00:14:20.892807  177307 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1213 00:14:20.892938  177307 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 00:14:20.893057  177307 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 00:14:20.893072  177307 kubeadm.go:322] 
	I1213 00:14:20.893182  177307 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 00:14:20.893286  177307 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1213 00:14:20.893307  177307 kubeadm.go:322] 
	I1213 00:14:20.893446  177307 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nss44e.j85t1ilri9kvvn0e \
	I1213 00:14:20.893588  177307 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d \
	I1213 00:14:20.893625  177307 kubeadm.go:322] 	--control-plane 
	I1213 00:14:20.893634  177307 kubeadm.go:322] 
	I1213 00:14:20.893740  177307 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1213 00:14:20.893752  177307 kubeadm.go:322] 
	I1213 00:14:20.893877  177307 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nss44e.j85t1ilri9kvvn0e \
	I1213 00:14:20.894017  177307 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1213 00:14:20.895217  177307 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 00:14:20.895249  177307 cni.go:84] Creating CNI manager for ""
	I1213 00:14:20.895261  177307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:14:20.897262  177307 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:14:20.898838  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:14:20.933446  177307 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:14:20.985336  177307 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:14:20.985435  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:20.985458  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=no-preload-143586 minikube.k8s.io/updated_at=2023_12_13T00_14_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:21.062513  177307 ops.go:34] apiserver oom_adj: -16
	I1213 00:14:21.374568  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:21.482135  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:22.088971  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:22.588816  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:23.088960  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:23.588701  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:24.088568  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:21.783473  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:23.784019  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:25.785712  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:24.588803  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:25.088983  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:25.589097  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:26.088561  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:26.589160  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:27.088601  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:27.588337  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:28.088578  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:28.588533  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:29.088398  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:28.284015  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:30.285509  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:29.588587  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:30.088826  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:30.588871  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:31.089336  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:31.588959  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:32.088390  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:32.589079  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:33.088948  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:33.589067  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:34.089108  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:34.261304  177307 kubeadm.go:1088] duration metric: took 13.275930767s to wait for elevateKubeSystemPrivileges.
	I1213 00:14:34.261367  177307 kubeadm.go:406] StartCluster complete in 5m12.573209179s
	I1213 00:14:34.261392  177307 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:14:34.261511  177307 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:14:34.264237  177307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:14:34.264668  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:14:34.264951  177307 config.go:182] Loaded profile config "no-preload-143586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1213 00:14:34.265065  177307 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:14:34.265128  177307 addons.go:69] Setting storage-provisioner=true in profile "no-preload-143586"
	I1213 00:14:34.265150  177307 addons.go:231] Setting addon storage-provisioner=true in "no-preload-143586"
	W1213 00:14:34.265161  177307 addons.go:240] addon storage-provisioner should already be in state true
	I1213 00:14:34.265202  177307 host.go:66] Checking if "no-preload-143586" exists ...
	I1213 00:14:34.265231  177307 addons.go:69] Setting default-storageclass=true in profile "no-preload-143586"
	I1213 00:14:34.265262  177307 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-143586"
	I1213 00:14:34.265606  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.265612  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.265627  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.265628  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.265846  177307 addons.go:69] Setting metrics-server=true in profile "no-preload-143586"
	I1213 00:14:34.265878  177307 addons.go:231] Setting addon metrics-server=true in "no-preload-143586"
	W1213 00:14:34.265890  177307 addons.go:240] addon metrics-server should already be in state true
	I1213 00:14:34.265935  177307 host.go:66] Checking if "no-preload-143586" exists ...
	I1213 00:14:34.266231  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.266277  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.287844  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34161
	I1213 00:14:34.287882  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38719
	I1213 00:14:34.287968  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43979
	I1213 00:14:34.288509  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.288529  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.288811  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.289178  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.289197  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.289310  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.289325  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.289335  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.289347  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.289707  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.289713  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.289736  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.289891  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:14:34.290392  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.290398  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.290415  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.290417  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.293696  177307 addons.go:231] Setting addon default-storageclass=true in "no-preload-143586"
	W1213 00:14:34.293725  177307 addons.go:240] addon default-storageclass should already be in state true
	I1213 00:14:34.293756  177307 host.go:66] Checking if "no-preload-143586" exists ...
	I1213 00:14:34.294150  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.294187  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.309103  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39273
	I1213 00:14:34.309683  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.310362  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.310387  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.310830  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.311091  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:14:34.312755  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37041
	I1213 00:14:34.313192  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.313601  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:14:34.313796  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.313814  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.316496  177307 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:14:34.314223  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.316102  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41945
	I1213 00:14:34.318112  177307 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:14:34.318127  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:14:34.318144  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:14:34.318260  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.318670  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.318693  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.319401  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.319422  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.319860  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.320080  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:14:34.321977  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:14:34.323695  177307 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 00:14:34.322509  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.325025  177307 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 00:14:34.325037  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 00:14:34.325053  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:14:34.323731  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:14:34.325089  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.323250  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:14:34.325250  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:14:34.325428  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:14:34.325563  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:14:34.328055  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.328364  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:14:34.328386  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.328712  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:14:34.328867  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:14:34.328980  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:14:34.329099  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:14:34.339175  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35817
	I1213 00:14:34.339820  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.340300  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.340314  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.340662  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.340821  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:14:34.342399  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:14:34.342673  177307 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:14:34.342694  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:14:34.342720  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:14:34.345475  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.345804  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:14:34.345839  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.346062  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:14:34.346256  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:14:34.346453  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:14:34.346622  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:14:34.425634  177307 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-143586" context rescaled to 1 replicas
	I1213 00:14:34.425672  177307 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.181 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:14:34.427471  177307 out.go:177] * Verifying Kubernetes components...
	I1213 00:14:32.783642  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:34.786810  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:34.428983  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:34.589995  177307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:14:34.590692  177307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:14:34.592452  177307 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 00:14:34.592472  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 00:14:34.643312  177307 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 00:14:34.643336  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 00:14:34.649786  177307 node_ready.go:35] waiting up to 6m0s for node "no-preload-143586" to be "Ready" ...
	I1213 00:14:34.649926  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 00:14:34.683306  177307 node_ready.go:49] node "no-preload-143586" has status "Ready":"True"
	I1213 00:14:34.683339  177307 node_ready.go:38] duration metric: took 33.525188ms waiting for node "no-preload-143586" to be "Ready" ...
	I1213 00:14:34.683352  177307 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:14:34.711542  177307 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:14:34.711570  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 00:14:34.738788  177307 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-8fb8b" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:34.823110  177307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:14:35.743550  177307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.153515373s)
	I1213 00:14:35.743618  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.743634  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.743661  177307 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.093703901s)
	I1213 00:14:35.743611  177307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.152891747s)
	I1213 00:14:35.743699  177307 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1213 00:14:35.743719  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.743732  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.744060  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:35.744059  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.744088  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.744100  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.744114  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.744114  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:35.744158  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.744195  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.744209  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.744223  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.745779  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.745829  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.745855  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.745838  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.745797  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:35.745790  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:35.757271  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.757292  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.757758  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.757776  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.757787  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:36.114702  177307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.291538738s)
	I1213 00:14:36.114760  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:36.114773  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:36.115132  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:36.115149  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:36.115158  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:36.115168  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:36.115411  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:36.115426  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:36.115436  177307 addons.go:467] Verifying addon metrics-server=true in "no-preload-143586"
	I1213 00:14:36.117975  177307 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1213 00:14:36.119554  177307 addons.go:502] enable addons completed in 1.85448385s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1213 00:14:37.069993  177307 pod_ready.go:102] pod "coredns-76f75df574-8fb8b" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:38.563525  177307 pod_ready.go:92] pod "coredns-76f75df574-8fb8b" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.563551  177307 pod_ready.go:81] duration metric: took 3.824732725s waiting for pod "coredns-76f75df574-8fb8b" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.563561  177307 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hs9rg" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.565949  177307 pod_ready.go:97] error getting pod "coredns-76f75df574-hs9rg" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-hs9rg" not found
	I1213 00:14:38.565976  177307 pod_ready.go:81] duration metric: took 2.409349ms waiting for pod "coredns-76f75df574-hs9rg" in "kube-system" namespace to be "Ready" ...
	E1213 00:14:38.565984  177307 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-hs9rg" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-hs9rg" not found
	I1213 00:14:38.565990  177307 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.571396  177307 pod_ready.go:92] pod "etcd-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.571416  177307 pod_ready.go:81] duration metric: took 5.419634ms waiting for pod "etcd-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.571424  177307 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.576228  177307 pod_ready.go:92] pod "kube-apiserver-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.576248  177307 pod_ready.go:81] duration metric: took 4.818853ms waiting for pod "kube-apiserver-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.576256  177307 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.581260  177307 pod_ready.go:92] pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.581281  177307 pod_ready.go:81] duration metric: took 5.019621ms waiting for pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.581289  177307 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xsdtr" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.760984  177307 pod_ready.go:92] pod "kube-proxy-xsdtr" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.761006  177307 pod_ready.go:81] duration metric: took 179.711484ms waiting for pod "kube-proxy-xsdtr" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.761015  177307 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:39.160713  177307 pod_ready.go:92] pod "kube-scheduler-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:39.160738  177307 pod_ready.go:81] duration metric: took 399.716844ms waiting for pod "kube-scheduler-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:39.160746  177307 pod_ready.go:38] duration metric: took 4.477382003s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:14:39.160762  177307 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:14:39.160809  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:14:39.176747  177307 api_server.go:72] duration metric: took 4.751030848s to wait for apiserver process to appear ...
	I1213 00:14:39.176774  177307 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:14:39.176791  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:14:39.183395  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 200:
	ok
	I1213 00:14:39.184769  177307 api_server.go:141] control plane version: v1.29.0-rc.2
	I1213 00:14:39.184789  177307 api_server.go:131] duration metric: took 8.009007ms to wait for apiserver health ...
	I1213 00:14:39.184799  177307 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:14:39.364215  177307 system_pods.go:59] 8 kube-system pods found
	I1213 00:14:39.364251  177307 system_pods.go:61] "coredns-76f75df574-8fb8b" [3ca4e237-6a35-4e8f-9731-3f9655eba995] Running
	I1213 00:14:39.364256  177307 system_pods.go:61] "etcd-no-preload-143586" [205757f5-5bd7-416c-af72-6ebf428d7302] Running
	I1213 00:14:39.364260  177307 system_pods.go:61] "kube-apiserver-no-preload-143586" [a479caf2-8ff4-4b78-8d93-e8f672a853b9] Running
	I1213 00:14:39.364265  177307 system_pods.go:61] "kube-controller-manager-no-preload-143586" [4afd5e8a-64b3-4e0c-a723-b6a6bd2445d4] Running
	I1213 00:14:39.364269  177307 system_pods.go:61] "kube-proxy-xsdtr" [23a261a4-17d1-4657-8052-02b71055c850] Running
	I1213 00:14:39.364273  177307 system_pods.go:61] "kube-scheduler-no-preload-143586" [71312243-0ba6-40e0-80a5-c1652b5270e9] Running
	I1213 00:14:39.364280  177307 system_pods.go:61] "metrics-server-57f55c9bc5-q7v45" [1579f5c9-d574-4ab8-9add-e89621b9c203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:39.364284  177307 system_pods.go:61] "storage-provisioner" [400b27cc-1713-4201-8097-3e3fd8004690] Running
	I1213 00:14:39.364292  177307 system_pods.go:74] duration metric: took 179.488069ms to wait for pod list to return data ...
	I1213 00:14:39.364301  177307 default_sa.go:34] waiting for default service account to be created ...
	I1213 00:14:39.560330  177307 default_sa.go:45] found service account: "default"
	I1213 00:14:39.560364  177307 default_sa.go:55] duration metric: took 196.056049ms for default service account to be created ...
	I1213 00:14:39.560376  177307 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 00:14:39.763340  177307 system_pods.go:86] 8 kube-system pods found
	I1213 00:14:39.763384  177307 system_pods.go:89] "coredns-76f75df574-8fb8b" [3ca4e237-6a35-4e8f-9731-3f9655eba995] Running
	I1213 00:14:39.763393  177307 system_pods.go:89] "etcd-no-preload-143586" [205757f5-5bd7-416c-af72-6ebf428d7302] Running
	I1213 00:14:39.763400  177307 system_pods.go:89] "kube-apiserver-no-preload-143586" [a479caf2-8ff4-4b78-8d93-e8f672a853b9] Running
	I1213 00:14:39.763405  177307 system_pods.go:89] "kube-controller-manager-no-preload-143586" [4afd5e8a-64b3-4e0c-a723-b6a6bd2445d4] Running
	I1213 00:14:39.763409  177307 system_pods.go:89] "kube-proxy-xsdtr" [23a261a4-17d1-4657-8052-02b71055c850] Running
	I1213 00:14:39.763414  177307 system_pods.go:89] "kube-scheduler-no-preload-143586" [71312243-0ba6-40e0-80a5-c1652b5270e9] Running
	I1213 00:14:39.763426  177307 system_pods.go:89] "metrics-server-57f55c9bc5-q7v45" [1579f5c9-d574-4ab8-9add-e89621b9c203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:39.763434  177307 system_pods.go:89] "storage-provisioner" [400b27cc-1713-4201-8097-3e3fd8004690] Running
	I1213 00:14:39.763449  177307 system_pods.go:126] duration metric: took 203.065345ms to wait for k8s-apps to be running ...
	I1213 00:14:39.763458  177307 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 00:14:39.763517  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:39.783072  177307 system_svc.go:56] duration metric: took 19.601725ms WaitForService to wait for kubelet.
	I1213 00:14:39.783120  177307 kubeadm.go:581] duration metric: took 5.357406192s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1213 00:14:39.783147  177307 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:14:39.962475  177307 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:14:39.962501  177307 node_conditions.go:123] node cpu capacity is 2
	I1213 00:14:39.962511  177307 node_conditions.go:105] duration metric: took 179.359327ms to run NodePressure ...
	I1213 00:14:39.962524  177307 start.go:228] waiting for startup goroutines ...
	I1213 00:14:39.962532  177307 start.go:233] waiting for cluster config update ...
	I1213 00:14:39.962544  177307 start.go:242] writing updated cluster config ...
	I1213 00:14:39.962816  177307 ssh_runner.go:195] Run: rm -f paused
	I1213 00:14:40.016206  177307 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.2 (minor skew: 1)
	I1213 00:14:40.018375  177307 out.go:177] * Done! kubectl is now configured to use "no-preload-143586" cluster and "default" namespace by default
	I1213 00:14:37.286105  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:39.786060  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:42.285678  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:44.784213  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:47.285680  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:49.783428  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:51.785923  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:54.283780  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:56.783343  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:59.283053  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:15:00.976984  176813 pod_ready.go:81] duration metric: took 4m0.000041493s waiting for pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace to be "Ready" ...
	E1213 00:15:00.977016  176813 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1213 00:15:00.977037  176813 pod_ready.go:38] duration metric: took 4m1.19985839s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:15:00.977064  176813 kubeadm.go:640] restartCluster took 5m6.659231001s
	W1213 00:15:00.977141  176813 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1213 00:15:00.977178  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 00:15:07.653665  176813 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.676456274s)
	I1213 00:15:07.653745  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:15:07.673981  176813 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:15:07.688018  176813 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:15:07.699196  176813 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:15:07.699244  176813 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1213 00:15:07.761890  176813 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1213 00:15:07.762010  176813 kubeadm.go:322] [preflight] Running pre-flight checks
	I1213 00:15:07.921068  176813 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 00:15:07.921220  176813 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 00:15:07.921360  176813 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 00:15:08.151937  176813 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 00:15:08.152063  176813 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 00:15:08.159296  176813 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1213 00:15:08.285060  176813 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 00:15:08.286903  176813 out.go:204]   - Generating certificates and keys ...
	I1213 00:15:08.287074  176813 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1213 00:15:08.287174  176813 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1213 00:15:08.290235  176813 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 00:15:08.290397  176813 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1213 00:15:08.290878  176813 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 00:15:08.291179  176813 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1213 00:15:08.291663  176813 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1213 00:15:08.292342  176813 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1213 00:15:08.292822  176813 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 00:15:08.293259  176813 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 00:15:08.293339  176813 kubeadm.go:322] [certs] Using the existing "sa" key
	I1213 00:15:08.293429  176813 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 00:15:08.526145  176813 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 00:15:08.586842  176813 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 00:15:08.636575  176813 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 00:15:08.706448  176813 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 00:15:08.710760  176813 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 00:15:08.713664  176813 out.go:204]   - Booting up control plane ...
	I1213 00:15:08.713773  176813 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 00:15:08.718431  176813 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 00:15:08.719490  176813 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 00:15:08.720327  176813 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 00:15:08.722707  176813 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 00:15:19.226839  176813 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503804 seconds
	I1213 00:15:19.227005  176813 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 00:15:19.245054  176813 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 00:15:19.773910  176813 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 00:15:19.774100  176813 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-508612 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1213 00:15:20.284136  176813 kubeadm.go:322] [bootstrap-token] Using token: lgq05i.maaa534t8w734gvq
	I1213 00:15:20.286042  176813 out.go:204]   - Configuring RBAC rules ...
	I1213 00:15:20.286186  176813 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 00:15:20.297875  176813 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 00:15:20.305644  176813 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 00:15:20.314089  176813 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 00:15:20.319091  176813 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 00:15:20.387872  176813 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1213 00:15:20.733546  176813 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1213 00:15:20.735072  176813 kubeadm.go:322] 
	I1213 00:15:20.735157  176813 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1213 00:15:20.735168  176813 kubeadm.go:322] 
	I1213 00:15:20.735280  176813 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1213 00:15:20.735291  176813 kubeadm.go:322] 
	I1213 00:15:20.735314  176813 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1213 00:15:20.735389  176813 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 00:15:20.735451  176813 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 00:15:20.735459  176813 kubeadm.go:322] 
	I1213 00:15:20.735517  176813 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1213 00:15:20.735602  176813 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 00:15:20.735660  176813 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 00:15:20.735666  176813 kubeadm.go:322] 
	I1213 00:15:20.735757  176813 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1213 00:15:20.735867  176813 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1213 00:15:20.735889  176813 kubeadm.go:322] 
	I1213 00:15:20.736036  176813 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token lgq05i.maaa534t8w734gvq \
	I1213 00:15:20.736152  176813 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d \
	I1213 00:15:20.736223  176813 kubeadm.go:322]     --control-plane 	  
	I1213 00:15:20.736240  176813 kubeadm.go:322] 
	I1213 00:15:20.736348  176813 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1213 00:15:20.736357  176813 kubeadm.go:322] 
	I1213 00:15:20.736472  176813 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token lgq05i.maaa534t8w734gvq \
	I1213 00:15:20.736596  176813 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1213 00:15:20.737307  176813 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
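The kubeadm output above ends with ready-to-use join commands. The bootstrap token they embed expires after 24 hours by default; a fresh join command can be printed on the control-plane node at any time (standard kubeadm behaviour, not something captured in this run):

    # run on the control-plane node; prints a complete "kubeadm join ..." line with a new token
    sudo kubeadm token create --print-join-command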
	I1213 00:15:20.737332  176813 cni.go:84] Creating CNI manager for ""
	I1213 00:15:20.737340  176813 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:15:20.739085  176813 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:15:20.740295  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:15:20.749618  176813 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
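The 457-byte conflist written above is not echoed into the log, but it can be read back from the node; the profile name below is the one used in this run:

    # inspect the bridge CNI config minikube just wrote (requires the profile to still be running)
    minikube ssh -p old-k8s-version-508612 -- sudo cat /etc/cni/net.d/1-k8s.conflist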
	I1213 00:15:20.767876  176813 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:15:20.767933  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:20.767984  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=old-k8s-version-508612 minikube.k8s.io/updated_at=2023_12_13T00_15_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:21.051677  176813 ops.go:34] apiserver oom_adj: -16
	I1213 00:15:21.051709  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:21.148546  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:21.741424  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:22.240885  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:22.741651  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:23.241662  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:23.741098  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:24.241530  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:24.741035  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:25.241391  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:25.741004  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:26.241402  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:26.741333  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:27.241828  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:27.741151  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:28.240933  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:28.741661  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:29.241431  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:29.741667  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:30.241070  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:30.741117  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:31.241355  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:31.741697  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:32.241779  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:32.741165  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:33.241739  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:33.741499  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:34.241477  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:34.740804  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:35.241596  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:35.374344  176813 kubeadm.go:1088] duration metric: took 14.606462065s to wait for elevateKubeSystemPrivileges.
	I1213 00:15:35.374388  176813 kubeadm.go:406] StartCluster complete in 5m41.120911791s
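The repeated "kubectl get sa default" calls above are minikube polling until the controller-manager has created the default ServiceAccount, at which point it treats the elevateKubeSystemPrivileges step (the minikube-rbac ClusterRoleBinding created earlier) as settled. The same check can be run by hand against this profile's context:

    # succeeds once the default ServiceAccount exists (context name matches the profile)
    kubectl --context old-k8s-version-508612 get serviceaccount default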
	I1213 00:15:35.374416  176813 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:15:35.374522  176813 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:15:35.376587  176813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:15:35.376829  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:15:35.376896  176813 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:15:35.376998  176813 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-508612"
	I1213 00:15:35.377018  176813 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-508612"
	I1213 00:15:35.377026  176813 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-508612"
	W1213 00:15:35.377036  176813 addons.go:240] addon storage-provisioner should already be in state true
	I1213 00:15:35.377038  176813 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-508612"
	I1213 00:15:35.377075  176813 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-508612"
	W1213 00:15:35.377089  176813 addons.go:240] addon metrics-server should already be in state true
	I1213 00:15:35.377107  176813 host.go:66] Checking if "old-k8s-version-508612" exists ...
	I1213 00:15:35.377140  176813 host.go:66] Checking if "old-k8s-version-508612" exists ...
	I1213 00:15:35.377536  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.377569  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.377577  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.377603  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.377036  176813 config.go:182] Loaded profile config "old-k8s-version-508612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1213 00:15:35.377038  176813 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-508612"
	I1213 00:15:35.378232  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.378269  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.396758  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38365
	I1213 00:15:35.397242  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.397563  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46353
	I1213 00:15:35.397732  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34007
	I1213 00:15:35.398240  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.398249  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.398768  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.398789  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.398927  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.398944  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.399039  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.399048  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.399144  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.399485  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.399506  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.399699  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:15:35.399783  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.399822  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.400014  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.400052  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.403424  176813 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-508612"
	W1213 00:15:35.403445  176813 addons.go:240] addon default-storageclass should already be in state true
	I1213 00:15:35.403470  176813 host.go:66] Checking if "old-k8s-version-508612" exists ...
	I1213 00:15:35.403784  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.403809  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.419742  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46263
	I1213 00:15:35.419763  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39553
	I1213 00:15:35.420351  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.420378  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.420912  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.420927  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.421042  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.421062  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.421403  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.421450  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.421588  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:15:35.421633  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:15:35.422473  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40033
	I1213 00:15:35.423216  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.423818  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:15:35.423875  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.423890  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.426328  176813 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 00:15:35.424310  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.424522  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:15:35.428333  176813 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 00:15:35.428351  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 00:15:35.428377  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:15:35.430256  176813 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:15:35.428950  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.430439  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.431959  176813 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:15:35.431260  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.431816  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:15:35.432011  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:15:35.431977  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:15:35.432031  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.432047  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:15:35.432199  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:15:35.432359  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:15:35.432587  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:15:35.434239  176813 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-508612" context rescaled to 1 replicas
	I1213 00:15:35.434275  176813 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:15:35.435769  176813 out.go:177] * Verifying Kubernetes components...
	I1213 00:15:35.437082  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:15:35.434982  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.435627  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:15:35.437148  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:15:35.437186  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.437343  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:15:35.437515  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:15:35.437646  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:15:35.450115  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33843
	I1213 00:15:35.450582  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.451077  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.451104  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.451548  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.451822  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:15:35.453721  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:15:35.454034  176813 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:15:35.454052  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:15:35.454072  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:15:35.456976  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.457326  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:15:35.457351  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.457530  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:15:35.457709  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:15:35.457859  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:15:35.458008  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:15:35.599631  176813 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:15:35.607268  176813 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-508612" to be "Ready" ...
	I1213 00:15:35.607407  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
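The sed pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host machine (192.168.39.1 in this run), and also enables the log plugin. After the replace, the Corefile effectively contains this block immediately before the forward stanza:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }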
	I1213 00:15:35.627686  176813 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 00:15:35.627720  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 00:15:35.641865  176813 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:15:35.653972  176813 node_ready.go:49] node "old-k8s-version-508612" has status "Ready":"True"
	I1213 00:15:35.654008  176813 node_ready.go:38] duration metric: took 46.699606ms waiting for node "old-k8s-version-508612" to be "Ready" ...
	I1213 00:15:35.654022  176813 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:15:35.701904  176813 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 00:15:35.701939  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 00:15:35.722752  176813 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-4xsr7" in "kube-system" namespace to be "Ready" ...
	I1213 00:15:35.779684  176813 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:15:35.779719  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 00:15:35.871071  176813 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:15:36.486377  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.486409  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.486428  176813 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1213 00:15:36.486495  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.486513  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.486715  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.486725  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.486734  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.486741  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.486816  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.486826  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.486834  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.486843  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.487015  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.487022  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.487048  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.487156  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.487172  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.487186  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.535004  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.535026  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.535335  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.535394  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.535407  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.671282  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.671308  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.671649  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.671719  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.671739  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.671758  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.671771  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.672067  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.672091  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.672092  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.672102  176813 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-508612"
	I1213 00:15:36.673881  176813 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1213 00:15:36.675200  176813 addons.go:502] enable addons completed in 1.298322525s: enabled=[storage-provisioner default-storageclass metrics-server]
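This test profile points metrics-server at fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line above), an image that cannot be pulled, so its pod stays Pending/ContainersNotReady throughout this log. The rollout can still be inspected directly; the deployment name is taken from the metrics-server-74d5856cc6-xcqf5 pod seen below:

    kubectl --context old-k8s-version-508612 -n kube-system get deployment metrics-server
    kubectl --context old-k8s-version-508612 -n kube-system get pods | grep metrics-server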
	I1213 00:15:37.860212  176813 pod_ready.go:102] pod "coredns-5644d7b6d9-4xsr7" in "kube-system" namespace has status "Ready":"False"
	I1213 00:15:40.350347  176813 pod_ready.go:92] pod "coredns-5644d7b6d9-4xsr7" in "kube-system" namespace has status "Ready":"True"
	I1213 00:15:40.350370  176813 pod_ready.go:81] duration metric: took 4.627584432s waiting for pod "coredns-5644d7b6d9-4xsr7" in "kube-system" namespace to be "Ready" ...
	I1213 00:15:40.350383  176813 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wz29m" in "kube-system" namespace to be "Ready" ...
	I1213 00:15:40.356218  176813 pod_ready.go:92] pod "kube-proxy-wz29m" in "kube-system" namespace has status "Ready":"True"
	I1213 00:15:40.356240  176813 pod_ready.go:81] duration metric: took 5.84816ms waiting for pod "kube-proxy-wz29m" in "kube-system" namespace to be "Ready" ...
	I1213 00:15:40.356252  176813 pod_ready.go:38] duration metric: took 4.702215033s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:15:40.356270  176813 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:15:40.356324  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:15:40.372391  176813 api_server.go:72] duration metric: took 4.938079614s to wait for apiserver process to appear ...
	I1213 00:15:40.372424  176813 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:15:40.372459  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:15:40.378882  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I1213 00:15:40.379747  176813 api_server.go:141] control plane version: v1.16.0
	I1213 00:15:40.379770  176813 api_server.go:131] duration metric: took 7.338199ms to wait for apiserver health ...
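The same probe can be issued manually against the endpoint logged above; /healthz answers "ok" when the apiserver considers itself healthy:

    # quick manual probe; -k skips TLS verification (or pass the cluster CA instead)
    curl -k https://192.168.39.70:8443/healthz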
	I1213 00:15:40.379780  176813 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:15:40.383090  176813 system_pods.go:59] 4 kube-system pods found
	I1213 00:15:40.383110  176813 system_pods.go:61] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:40.383115  176813 system_pods.go:61] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:40.383121  176813 system_pods.go:61] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:40.383126  176813 system_pods.go:61] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:40.383133  176813 system_pods.go:74] duration metric: took 3.346988ms to wait for pod list to return data ...
	I1213 00:15:40.383140  176813 default_sa.go:34] waiting for default service account to be created ...
	I1213 00:15:40.385822  176813 default_sa.go:45] found service account: "default"
	I1213 00:15:40.385843  176813 default_sa.go:55] duration metric: took 2.696485ms for default service account to be created ...
	I1213 00:15:40.385851  176813 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 00:15:40.390030  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:40.390056  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:40.390061  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:40.390068  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:40.390072  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:40.390094  176813 retry.go:31] will retry after 206.30305ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:40.602546  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:40.602577  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:40.602582  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:40.602589  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:40.602593  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:40.602611  176813 retry.go:31] will retry after 375.148566ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:40.987598  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:40.987626  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:40.987631  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:40.987639  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:40.987645  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:40.987663  176813 retry.go:31] will retry after 354.607581ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:41.347931  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:41.347965  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:41.347974  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:41.347984  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:41.347992  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:41.348012  176813 retry.go:31] will retry after 443.179207ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:41.796661  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:41.796687  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:41.796692  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:41.796711  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:41.796716  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:41.796733  176813 retry.go:31] will retry after 468.875458ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:42.271565  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:42.271591  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:42.271596  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:42.271603  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:42.271608  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:42.271624  176813 retry.go:31] will retry after 696.629881ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:42.974971  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:42.974997  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:42.975003  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:42.975009  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:42.975015  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:42.975031  176813 retry.go:31] will retry after 830.83436ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:43.810755  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:43.810784  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:43.810792  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:43.810802  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:43.810808  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:43.810830  176813 retry.go:31] will retry after 1.429308487s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:45.245813  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:45.245844  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:45.245852  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:45.245862  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:45.245867  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:45.245887  176813 retry.go:31] will retry after 1.715356562s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:46.966484  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:46.966512  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:46.966517  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:46.966523  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:46.966529  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:46.966546  176813 retry.go:31] will retry after 2.125852813s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:49.097419  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:49.097450  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:49.097460  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:49.097472  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:49.097478  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:49.097496  176813 retry.go:31] will retry after 2.902427415s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:52.005062  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:52.005097  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:52.005106  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:52.005119  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:52.005128  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:52.005154  176813 retry.go:31] will retry after 3.461524498s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:55.471450  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:55.471474  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:55.471480  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:55.471487  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:55.471492  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:55.471509  176813 retry.go:31] will retry after 2.969353102s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:58.445285  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:58.445316  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:58.445324  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:58.445334  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:58.445341  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:58.445363  176813 retry.go:31] will retry after 3.938751371s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:02.389811  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:16:02.389839  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:02.389845  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:02.389851  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:02.389856  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:02.389873  176813 retry.go:31] will retry after 5.281550171s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:07.676759  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:16:07.676786  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:07.676791  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:07.676798  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:07.676802  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:07.676820  176813 retry.go:31] will retry after 8.193775139s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:15.875917  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:16:15.875946  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:15.875951  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:15.875958  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:15.875962  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:15.875980  176813 retry.go:31] will retry after 8.515960159s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:24.397972  176813 system_pods.go:86] 5 kube-system pods found
	I1213 00:16:24.398006  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:24.398014  176813 system_pods.go:89] "etcd-old-k8s-version-508612" [de991ae9-f906-498b-bda3-3cf40035fd6a] Running
	I1213 00:16:24.398021  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:24.398032  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:24.398039  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:24.398060  176813 retry.go:31] will retry after 10.707543157s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:35.112639  176813 system_pods.go:86] 7 kube-system pods found
	I1213 00:16:35.112667  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:35.112672  176813 system_pods.go:89] "etcd-old-k8s-version-508612" [de991ae9-f906-498b-bda3-3cf40035fd6a] Running
	I1213 00:16:35.112677  176813 system_pods.go:89] "kube-controller-manager-old-k8s-version-508612" [c6a195a2-e710-4791-b0a3-32618d3c752c] Running
	I1213 00:16:35.112681  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:35.112685  176813 system_pods.go:89] "kube-scheduler-old-k8s-version-508612" [6ee728bb-d81b-43f2-8aa3-4848871a8f41] Running
	I1213 00:16:35.112691  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:35.112696  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:35.112712  176813 retry.go:31] will retry after 13.429366805s: missing components: kube-apiserver
	I1213 00:16:48.550673  176813 system_pods.go:86] 8 kube-system pods found
	I1213 00:16:48.550704  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:48.550710  176813 system_pods.go:89] "etcd-old-k8s-version-508612" [de991ae9-f906-498b-bda3-3cf40035fd6a] Running
	I1213 00:16:48.550714  176813 system_pods.go:89] "kube-apiserver-old-k8s-version-508612" [1473501b-d17d-4bbb-a61a-1d244f54f70c] Running
	I1213 00:16:48.550718  176813 system_pods.go:89] "kube-controller-manager-old-k8s-version-508612" [c6a195a2-e710-4791-b0a3-32618d3c752c] Running
	I1213 00:16:48.550722  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:48.550726  176813 system_pods.go:89] "kube-scheduler-old-k8s-version-508612" [6ee728bb-d81b-43f2-8aa3-4848871a8f41] Running
	I1213 00:16:48.550733  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:48.550737  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:48.550747  176813 system_pods.go:126] duration metric: took 1m8.164889078s to wait for k8s-apps to be running ...
	I1213 00:16:48.550756  176813 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 00:16:48.550811  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:16:48.568833  176813 system_svc.go:56] duration metric: took 18.062353ms WaitForService to wait for kubelet.
	I1213 00:16:48.568876  176813 kubeadm.go:581] duration metric: took 1m13.134572871s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1213 00:16:48.568901  176813 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:16:48.573103  176813 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:16:48.573128  176813 node_conditions.go:123] node cpu capacity is 2
	I1213 00:16:48.573137  176813 node_conditions.go:105] duration metric: took 4.231035ms to run NodePressure ...
	I1213 00:16:48.573148  176813 start.go:228] waiting for startup goroutines ...
	I1213 00:16:48.573154  176813 start.go:233] waiting for cluster config update ...
	I1213 00:16:48.573163  176813 start.go:242] writing updated cluster config ...
	I1213 00:16:48.573436  176813 ssh_runner.go:195] Run: rm -f paused
	I1213 00:16:48.627109  176813 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1213 00:16:48.628688  176813 out.go:177] 
	W1213 00:16:48.630154  176813 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1213 00:16:48.631498  176813 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1213 00:16:48.633089  176813 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-508612" cluster and "default" namespace by default
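The warning above flags a 12-minor-version skew between the host kubectl (1.28.4) and the 1.16.0 cluster; the version-matched client bundled with minikube avoids that, as the hint suggests:

    # uses a kubectl matching the cluster version for this profile
    minikube kubectl -p old-k8s-version-508612 -- get pods -A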
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-12-13 00:08:25 UTC, ends at Wed 2023-12-13 00:23:05 UTC. --
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.737482687Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e0d17c42c09c56f857947503ff059c73b2692795fd48cd37dede5099ef99bd8c,PodSandboxId:2afacfbbbbfe13da138f95ddca98b10ce74facab728ed724a88c7f181212cb0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426441639011725,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816660d7-a041-4695-b7da-d977b8891935,},Annotations:map[string]string{io.kubernetes.container.hash: bd5eb70a,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339d0782bfacf9b197014c96754a0f59569fcfb63fce1bb4f90bed7d66518056,PodSandboxId:759ce7bd9ba388e0908284d15942a340aa84a30244f472c249403d48c5b75d08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702426441136919423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ccq47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68f3c55f-175e-40af-a769-65c859d5012d,},Annotations:map[string]string{io.kubernetes.container.hash: 771d23cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8146da064c98405a2d2cdcd64c3f2a6e5580e6a9bbfadac2bdfc875edeb7378,PodSandboxId:1002d62d8148b0ca1dc41ef56f63c1743a44b012fe1b5ce44abd5346e5d2513e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702426440542875459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gs4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4b86e83-a0a1-4bf8-958e-e154e91f47ef,},Annotations:map[string]string{io.kubernetes.container.hash: bea787ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42423e8c2a4c3add467f60fe17710b5a0f6c79b2384aa761d1aad5e15519f61,PodSandboxId:1fd6c600d898c6263f1f31c9cea5336253a12742e56b8d9d9eff498cffcfec94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702426417123895163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: c31cdd67a6e054cf9c0b1601f37db20e,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad38ad2ba8d7ef925b8a1713b6636c494e5fc4aae29f70052c75a59a0f5f6503,PodSandboxId:daec354eb5e8a6856e6964fa58f444f5407026303401242d34c88de2533b6449,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702426416912822073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0824a86eab624ba769ff3e04bee2867a,},Annotations:
map[string]string{io.kubernetes.container.hash: f29cbe9d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c402daaf59971db0446bc27bb2a56f520952f33b29a0226962d19dbfa6ab69f9,PodSandboxId:da1c47eff717913d3accff39043e87ca6bffed2075e1ff9aeb17a728ceba8468,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702426416763737114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fdb93043e71a6cbe9511612a78a69a1,},Annotations:map[string
]string{io.kubernetes.container.hash: 60ab00ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b771f8110ea5225d736a60f1faa7ddc9ac8c341202c9bff8c1c82f46d16082c1,PodSandboxId:154d2c4d08454757e63a28ea0f551da1c3b91db6b21adb01eb12bf0017127246,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702426416669288068,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb76d93a779cccf3f04273dc3f836d
5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6c1ba649-b879-4e03-950c-e5d108a62be3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.774392601Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8d290b91-0e0c-4347-99f6-2c594d87237a name=/runtime.v1.RuntimeService/Version
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.774487511Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8d290b91-0e0c-4347-99f6-2c594d87237a name=/runtime.v1.RuntimeService/Version
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.775706386Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6783baa8-4bd6-47bc-b936-e49dab58af1f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.776367768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702426985776349747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=6783baa8-4bd6-47bc-b936-e49dab58af1f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.777569214Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0decba4e-f15a-41c9-834f-5d048f4775ad name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.777665128Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0decba4e-f15a-41c9-834f-5d048f4775ad name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.777936063Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e0d17c42c09c56f857947503ff059c73b2692795fd48cd37dede5099ef99bd8c,PodSandboxId:2afacfbbbbfe13da138f95ddca98b10ce74facab728ed724a88c7f181212cb0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426441639011725,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816660d7-a041-4695-b7da-d977b8891935,},Annotations:map[string]string{io.kubernetes.container.hash: bd5eb70a,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339d0782bfacf9b197014c96754a0f59569fcfb63fce1bb4f90bed7d66518056,PodSandboxId:759ce7bd9ba388e0908284d15942a340aa84a30244f472c249403d48c5b75d08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702426441136919423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ccq47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68f3c55f-175e-40af-a769-65c859d5012d,},Annotations:map[string]string{io.kubernetes.container.hash: 771d23cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8146da064c98405a2d2cdcd64c3f2a6e5580e6a9bbfadac2bdfc875edeb7378,PodSandboxId:1002d62d8148b0ca1dc41ef56f63c1743a44b012fe1b5ce44abd5346e5d2513e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702426440542875459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gs4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4b86e83-a0a1-4bf8-958e-e154e91f47ef,},Annotations:map[string]string{io.kubernetes.container.hash: bea787ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42423e8c2a4c3add467f60fe17710b5a0f6c79b2384aa761d1aad5e15519f61,PodSandboxId:1fd6c600d898c6263f1f31c9cea5336253a12742e56b8d9d9eff498cffcfec94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702426417123895163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: c31cdd67a6e054cf9c0b1601f37db20e,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad38ad2ba8d7ef925b8a1713b6636c494e5fc4aae29f70052c75a59a0f5f6503,PodSandboxId:daec354eb5e8a6856e6964fa58f444f5407026303401242d34c88de2533b6449,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702426416912822073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0824a86eab624ba769ff3e04bee2867a,},Annotations:
map[string]string{io.kubernetes.container.hash: f29cbe9d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c402daaf59971db0446bc27bb2a56f520952f33b29a0226962d19dbfa6ab69f9,PodSandboxId:da1c47eff717913d3accff39043e87ca6bffed2075e1ff9aeb17a728ceba8468,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702426416763737114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fdb93043e71a6cbe9511612a78a69a1,},Annotations:map[string
]string{io.kubernetes.container.hash: 60ab00ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b771f8110ea5225d736a60f1faa7ddc9ac8c341202c9bff8c1c82f46d16082c1,PodSandboxId:154d2c4d08454757e63a28ea0f551da1c3b91db6b21adb01eb12bf0017127246,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702426416669288068,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb76d93a779cccf3f04273dc3f836d
5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0decba4e-f15a-41c9-834f-5d048f4775ad name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.790285880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e8c3dcf7-e442-44ab-9e63-f8a9bcb052a4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.790387283Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e8c3dcf7-e442-44ab-9e63-f8a9bcb052a4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.790608720Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e0d17c42c09c56f857947503ff059c73b2692795fd48cd37dede5099ef99bd8c,PodSandboxId:2afacfbbbbfe13da138f95ddca98b10ce74facab728ed724a88c7f181212cb0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426441639011725,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816660d7-a041-4695-b7da-d977b8891935,},Annotations:map[string]string{io.kubernetes.container.hash: bd5eb70a,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339d0782bfacf9b197014c96754a0f59569fcfb63fce1bb4f90bed7d66518056,PodSandboxId:759ce7bd9ba388e0908284d15942a340aa84a30244f472c249403d48c5b75d08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702426441136919423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ccq47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68f3c55f-175e-40af-a769-65c859d5012d,},Annotations:map[string]string{io.kubernetes.container.hash: 771d23cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8146da064c98405a2d2cdcd64c3f2a6e5580e6a9bbfadac2bdfc875edeb7378,PodSandboxId:1002d62d8148b0ca1dc41ef56f63c1743a44b012fe1b5ce44abd5346e5d2513e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702426440542875459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gs4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4b86e83-a0a1-4bf8-958e-e154e91f47ef,},Annotations:map[string]string{io.kubernetes.container.hash: bea787ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42423e8c2a4c3add467f60fe17710b5a0f6c79b2384aa761d1aad5e15519f61,PodSandboxId:1fd6c600d898c6263f1f31c9cea5336253a12742e56b8d9d9eff498cffcfec94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702426417123895163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: c31cdd67a6e054cf9c0b1601f37db20e,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad38ad2ba8d7ef925b8a1713b6636c494e5fc4aae29f70052c75a59a0f5f6503,PodSandboxId:daec354eb5e8a6856e6964fa58f444f5407026303401242d34c88de2533b6449,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702426416912822073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0824a86eab624ba769ff3e04bee2867a,},Annotations:
map[string]string{io.kubernetes.container.hash: f29cbe9d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c402daaf59971db0446bc27bb2a56f520952f33b29a0226962d19dbfa6ab69f9,PodSandboxId:da1c47eff717913d3accff39043e87ca6bffed2075e1ff9aeb17a728ceba8468,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702426416763737114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fdb93043e71a6cbe9511612a78a69a1,},Annotations:map[string
]string{io.kubernetes.container.hash: 60ab00ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b771f8110ea5225d736a60f1faa7ddc9ac8c341202c9bff8c1c82f46d16082c1,PodSandboxId:154d2c4d08454757e63a28ea0f551da1c3b91db6b21adb01eb12bf0017127246,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702426416669288068,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb76d93a779cccf3f04273dc3f836d
5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e8c3dcf7-e442-44ab-9e63-f8a9bcb052a4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.791946867Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:e0d17c42c09c56f857947503ff059c73b2692795fd48cd37dede5099ef99bd8c,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=8099e874-283b-4c3f-8d74-22e0130917bf name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.792082094Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:e0d17c42c09c56f857947503ff059c73b2692795fd48cd37dede5099ef99bd8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1702426441742167335,StartedAt:1702426441795455436,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816660d7-a041-4695-b7da-d977b8891935,},Annotations:map[string]string{io.kubernetes.container.hash: bd5eb70a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/816660d7-a041-4695-b7da-d977b8891935/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/816660d7-a041-4695-b7da-d977b8891935/containers/storage-provisioner/f1a837c6,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/816660d7-a041-4695-b7da-d977b8891935/volumes/kubernetes.io~projected/kube-api-access-vgj85,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_storage-provisioner_816660d7-a041-4695-b7da-d977b8891935/storage-provisioner/0
.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=8099e874-283b-4c3f-8d74-22e0130917bf name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.792542174Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:339d0782bfacf9b197014c96754a0f59569fcfb63fce1bb4f90bed7d66518056,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=67a7123b-6ff0-41f7-b898-860e2da62d4c name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.792663486Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:339d0782bfacf9b197014c96754a0f59569fcfb63fce1bb4f90bed7d66518056,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1702426441469541777,StartedAt:1702426441509055875,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.28.4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ccq47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68f3c55f-175e-40af-a769-65c859d5012d,},Annotations:map[string]string{io.kubernetes.container.hash: 771d23cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/68f3c55f-175e-40af-a769-65c859d5012d/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/68f3c55f-175e-40af-a769-65c859d5012d/containers/kube-proxy/19714bdc,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/kubelet/pods/68f3c55f-175e-40af-a769-65c859d5012d/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io
/serviceaccount,HostPath:/var/lib/kubelet/pods/68f3c55f-175e-40af-a769-65c859d5012d/volumes/kubernetes.io~projected/kube-api-access-h4mvd,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-proxy-ccq47_68f3c55f-175e-40af-a769-65c859d5012d/kube-proxy/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=67a7123b-6ff0-41f7-b898-860e2da62d4c name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.793278463Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:c8146da064c98405a2d2cdcd64c3f2a6e5580e6a9bbfadac2bdfc875edeb7378,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=c6e37cda-09d0-4812-958b-189bae58770e name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.793392010Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:c8146da064c98405a2d2cdcd64c3f2a6e5580e6a9bbfadac2bdfc875edeb7378,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1702426440722353152,StartedAt:1702426440781974322,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.10.1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gs4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4b86e83-a0a1-4bf8-958e-e154e91f47ef,},Annotations:map[string]string{io.kubernetes.container.hash: bea787ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/d4b86e83-a0a1-4bf8-958e-e154e91f47ef/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d4b86e83-a0a1-4bf8-958e-e154e91f47ef/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d4b86e83-a0a1-4bf8-958e-e154e91f47ef/containers/coredns/7aa11540,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/
lib/kubelet/pods/d4b86e83-a0a1-4bf8-958e-e154e91f47ef/volumes/kubernetes.io~projected/kube-api-access-sgr89,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_coredns-5dd5756b68-gs4kb_d4b86e83-a0a1-4bf8-958e-e154e91f47ef/coredns/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=c6e37cda-09d0-4812-958b-189bae58770e name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.793884071Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:d42423e8c2a4c3add467f60fe17710b5a0f6c79b2384aa761d1aad5e15519f61,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=afb56c49-f091-499c-a819-5cf0633b63b3 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.793964214Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:d42423e8c2a4c3add467f60fe17710b5a0f6c79b2384aa761d1aad5e15519f61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1702426417374445824,StartedAt:1702426418644089211,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.28.4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31cdd67a6e054cf9c0b1601f37db20e,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/c31cdd67a6e054cf9c0b1601f37db20e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/c31cdd67a6e054cf9c0b1601f37db20e/containers/kube-scheduler/d7d9bef0,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-scheduler-embed-certs-335807_c31cdd67a6e054cf9c0b1601f37db20e/kube-scheduler/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=afb56c49-f091-499c-a819-5cf0633b63b3 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.794572557Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:ad38ad2ba8d7ef925b8a1713b6636c494e5fc4aae29f70052c75a59a0f5f6503,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=e5eba2d1-1ff1-4d82-81a9-d8badf6f5ee2 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.794675779Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:ad38ad2ba8d7ef925b8a1713b6636c494e5fc4aae29f70052c75a59a0f5f6503,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1702426417071016858,StartedAt:1702426418682319822,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.9-0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0824a86eab624ba769ff3e04bee2867a,},Annotations:map[string]string{io.kubernetes.container.hash: f29cbe9d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/0824a86eab624ba769ff3e04bee2867a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/0824a86eab624ba769ff3e04bee2867a/containers/etcd/20e9a6b3,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_etcd-embed-certs-335807_0824a86eab624ba769ff3e04bee2867a/etcd/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=e5eba2d1-1ff1-4d82-81a9-d8badf6f5ee2 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.795383556Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:c402daaf59971db0446bc27bb2a56f520952f33b29a0226962d19dbfa6ab69f9,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=aa40b573-2071-4979-8802-326889531e31 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.795514800Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:c402daaf59971db0446bc27bb2a56f520952f33b29a0226962d19dbfa6ab69f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1702426416891123202,StartedAt:1702426417811984523,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.28.4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fdb93043e71a6cbe9511612a78a69a1,},Annotations:map[string]string{io.kubernetes.container.hash: 60ab00ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/5fdb93043e71a6cbe9511612a78a69a1/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/5fdb93043e71a6cbe9511612a78a69a1/containers/kube-apiserver/02ce2416,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-apiserver-embed-certs-335807_5fdb93043
e71a6cbe9511612a78a69a1/kube-apiserver/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=aa40b573-2071-4979-8802-326889531e31 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.796057550Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:b771f8110ea5225d736a60f1faa7ddc9ac8c341202c9bff8c1c82f46d16082c1,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=51274e55-b19a-4f19-981e-706927c966e3 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 13 00:23:05 embed-certs-335807 crio[726]: time="2023-12-13 00:23:05.796144820Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:b771f8110ea5225d736a60f1faa7ddc9ac8c341202c9bff8c1c82f46d16082c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1702426416816006064,StartedAt:1702426417714979708,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.28.4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb76d93a779cccf3f04273dc3f836d5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/7eb76d93a779cccf3f04273dc3f836d5/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/7eb76d93a779cccf3f04273dc3f836d5/containers/kube-controller-manager/fbcaaf21,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRI
VATE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-controller-manager-embed-certs-335807_7eb76d93a779cccf3f04273dc3f836d5/kube-controller-manager/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=51274e55-b19a-4f19-981e-706927c966e3 name=/runtime.v1.RuntimeService/ContainerStatus
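	The CRI-O journal above consists mostly of RuntimeService/ListContainers and ContainerStatus calls answered over the CRI socket while logs were being collected. A minimal Go sketch of issuing the same ListContainers call is shown below, assuming the unix:///var/run/crio/crio.sock path from the node's cri-socket annotation and the k8s.io/cri-api module; it is not part of the test harness.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Dial the CRI-O socket; the path matches the cri-socket annotation above.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)

		// An empty filter returns the full container list, as the log notes
		// ("No filters were applied, returning full container list").
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Print the first 13 characters of the ID, as in the container status table.
			fmt.Printf("%s  %-25s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}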
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e0d17c42c09c5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   2afacfbbbbfe1       storage-provisioner
	339d0782bfacf       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   759ce7bd9ba38       kube-proxy-ccq47
	c8146da064c98       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   1002d62d8148b       coredns-5dd5756b68-gs4kb
	d42423e8c2a4c       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   1fd6c600d898c       kube-scheduler-embed-certs-335807
	ad38ad2ba8d7e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   daec354eb5e8a       etcd-embed-certs-335807
	c402daaf59971       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   da1c47eff7179       kube-apiserver-embed-certs-335807
	b771f8110ea52       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   154d2c4d08454       kube-controller-manager-embed-certs-335807
	
	* 
	* ==> coredns [c8146da064c98405a2d2cdcd64c3f2a6e5580e6a9bbfadac2bdfc875edeb7378] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:36328 - 4273 "HINFO IN 8678516472761787121.6070623347578583618. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014356926s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-335807
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-335807
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446
	                    minikube.k8s.io/name=embed-certs-335807
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_13T00_13_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Dec 2023 00:13:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-335807
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 13 Dec 2023 00:23:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 13 Dec 2023 00:19:11 +0000   Wed, 13 Dec 2023 00:13:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 13 Dec 2023 00:19:11 +0000   Wed, 13 Dec 2023 00:13:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 13 Dec 2023 00:19:11 +0000   Wed, 13 Dec 2023 00:13:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 13 Dec 2023 00:19:11 +0000   Wed, 13 Dec 2023 00:13:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.249
	  Hostname:    embed-certs-335807
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 f86487f5eff8493d8f8c3113884f4708
	  System UUID:                f86487f5-eff8-493d-8f8c-3113884f4708
	  Boot ID:                    4e2e7d95-2434-46bf-b05f-70d0b33de31f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-gs4kb                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 etcd-embed-certs-335807                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-embed-certs-335807             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-embed-certs-335807    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-ccq47                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-embed-certs-335807             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-57f55c9bc5-z7qb4               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m7s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  Starting                 9m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m31s (x8 over 9m31s)  kubelet          Node embed-certs-335807 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m31s (x8 over 9m31s)  kubelet          Node embed-certs-335807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m31s (x7 over 9m31s)  kubelet          Node embed-certs-335807 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node embed-certs-335807 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node embed-certs-335807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node embed-certs-335807 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s                  kubelet          Node embed-certs-335807 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m11s                  kubelet          Node embed-certs-335807 status is now: NodeReady
	  Normal  RegisteredNode           9m10s                  node-controller  Node embed-certs-335807 event: Registered Node embed-certs-335807 in Controller
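	The Conditions, Capacity and Allocatable blocks above carry the same data the start log reads when it verifies NodePressure and reports the node's cpu and ephemeral-storage capacity (node_conditions.go in the log). Below is a minimal client-go sketch of that read, assuming a default kubeconfig path and the node name from this output; it is not the check minikube itself runs.

	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		// Assumed kubeconfig location; the test harness points kubectl at a per-profile config.
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Node name taken from the describe output above.
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "embed-certs-335807", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, cond := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", cond.Type, cond.Status, cond.Reason)
		}
		fmt.Println("cpu:", node.Status.Allocatable.Cpu().String(),
			"ephemeral-storage:", node.Status.Allocatable.StorageEphemeral().String())
	}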
	
	* 
	* ==> dmesg <==
	* [Dec13 00:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068122] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.371535] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.471790] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.134572] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.400982] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.436867] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.108898] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.141874] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.125633] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[  +0.207261] systemd-fstab-generator[712]: Ignoring "noauto" for root device
	[ +17.675083] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[Dec13 00:09] kauditd_printk_skb: 34 callbacks suppressed
	[Dec13 00:13] kauditd_printk_skb: 2 callbacks suppressed
	[  +4.996799] systemd-fstab-generator[3704]: Ignoring "noauto" for root device
	[  +9.805914] systemd-fstab-generator[4029]: Ignoring "noauto" for root device
	[Dec13 00:14] kauditd_printk_skb: 9 callbacks suppressed
	
	* 
	* ==> etcd [ad38ad2ba8d7ef925b8a1713b6636c494e5fc4aae29f70052c75a59a0f5f6503] <==
	* {"level":"info","ts":"2023-12-13T00:13:38.932582Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7bf18ae696d1660f switched to configuration voters=(8931072259029820943)"}
	{"level":"info","ts":"2023-12-13T00:13:38.932731Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"573ffd3ad1c9e277","local-member-id":"7bf18ae696d1660f","added-peer-id":"7bf18ae696d1660f","added-peer-peer-urls":["https://192.168.61.249:2380"]}
	{"level":"info","ts":"2023-12-13T00:13:38.953041Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-13T00:13:38.953246Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.249:2380"}
	{"level":"info","ts":"2023-12-13T00:13:38.953435Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.249:2380"}
	{"level":"info","ts":"2023-12-13T00:13:38.955117Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7bf18ae696d1660f","initial-advertise-peer-urls":["https://192.168.61.249:2380"],"listen-peer-urls":["https://192.168.61.249:2380"],"advertise-client-urls":["https://192.168.61.249:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.249:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-13T00:13:38.955185Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-13T00:13:39.478063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7bf18ae696d1660f is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-13T00:13:39.478166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7bf18ae696d1660f became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-13T00:13:39.478224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7bf18ae696d1660f received MsgPreVoteResp from 7bf18ae696d1660f at term 1"}
	{"level":"info","ts":"2023-12-13T00:13:39.478261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7bf18ae696d1660f became candidate at term 2"}
	{"level":"info","ts":"2023-12-13T00:13:39.478297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7bf18ae696d1660f received MsgVoteResp from 7bf18ae696d1660f at term 2"}
	{"level":"info","ts":"2023-12-13T00:13:39.478325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7bf18ae696d1660f became leader at term 2"}
	{"level":"info","ts":"2023-12-13T00:13:39.47835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7bf18ae696d1660f elected leader 7bf18ae696d1660f at term 2"}
	{"level":"info","ts":"2023-12-13T00:13:39.479661Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-13T00:13:39.481064Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7bf18ae696d1660f","local-member-attributes":"{Name:embed-certs-335807 ClientURLs:[https://192.168.61.249:2379]}","request-path":"/0/members/7bf18ae696d1660f/attributes","cluster-id":"573ffd3ad1c9e277","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-13T00:13:39.481224Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-13T00:13:39.481884Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"573ffd3ad1c9e277","local-member-id":"7bf18ae696d1660f","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-13T00:13:39.481992Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-13T00:13:39.482044Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-13T00:13:39.482717Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.249:2379"}
	{"level":"info","ts":"2023-12-13T00:13:39.483128Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-13T00:13:39.483961Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-13T00:13:39.485241Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-13T00:13:39.485285Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:23:06 up 14 min,  0 users,  load average: 0.88, 0.41, 0.24
	Linux embed-certs-335807 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c402daaf59971db0446bc27bb2a56f520952f33b29a0226962d19dbfa6ab69f9] <==
	* W1213 00:18:42.456857       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:18:42.456911       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1213 00:18:42.456925       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 00:18:42.457047       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:18:42.457127       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:18:42.458300       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1213 00:19:41.332083       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1213 00:19:42.457921       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:19:42.458048       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1213 00:19:42.458079       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 00:19:42.459137       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:19:42.459246       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:19:42.459288       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1213 00:20:41.331610       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1213 00:21:41.332065       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1213 00:21:42.459209       1 handler_proxy.go:93] no RequestInfo found in the context
	W1213 00:21:42.459407       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:21:42.459428       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1213 00:21:42.459537       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1213 00:21:42.459629       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:21:42.461370       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1213 00:22:41.331939       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [b771f8110ea5225d736a60f1faa7ddc9ac8c341202c9bff8c1c82f46d16082c1] <==
	* I1213 00:17:29.139362       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="117.975µs"
	E1213 00:17:56.373566       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:17:56.966168       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:18:26.379606       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:18:26.976423       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:18:56.386955       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:18:56.986992       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:19:26.393012       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:19:26.996367       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:19:56.399497       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:19:57.006494       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1213 00:20:09.135658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="294.773µs"
	I1213 00:20:22.129281       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="126.229µs"
	E1213 00:20:26.405213       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:20:27.015216       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:20:56.412710       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:20:57.024354       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:21:26.419624       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:21:27.034247       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:21:56.425550       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:21:57.043550       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:22:26.431101       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:22:27.052277       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:22:56.437669       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:22:57.068077       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [339d0782bfacf9b197014c96754a0f59569fcfb63fce1bb4f90bed7d66518056] <==
	* I1213 00:14:01.734610       1 server_others.go:69] "Using iptables proxy"
	I1213 00:14:01.767124       1 node.go:141] Successfully retrieved node IP: 192.168.61.249
	I1213 00:14:01.911911       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1213 00:14:01.912016       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 00:14:01.916869       1 server_others.go:152] "Using iptables Proxier"
	I1213 00:14:01.917575       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1213 00:14:01.917839       1 server.go:846] "Version info" version="v1.28.4"
	I1213 00:14:01.918463       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 00:14:01.922527       1 config.go:188] "Starting service config controller"
	I1213 00:14:01.922683       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1213 00:14:01.923217       1 config.go:97] "Starting endpoint slice config controller"
	I1213 00:14:01.923531       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1213 00:14:01.924146       1 config.go:315] "Starting node config controller"
	I1213 00:14:01.924198       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1213 00:14:02.023426       1 shared_informer.go:318] Caches are synced for service config
	I1213 00:14:02.023750       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1213 00:14:02.024459       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d42423e8c2a4c3add467f60fe17710b5a0f6c79b2384aa761d1aad5e15519f61] <==
	* E1213 00:13:41.488379       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1213 00:13:41.488385       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 00:13:41.488391       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1213 00:13:41.488398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1213 00:13:42.295553       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1213 00:13:42.295669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1213 00:13:42.395671       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1213 00:13:42.396157       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1213 00:13:42.403128       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1213 00:13:42.403180       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1213 00:13:42.529386       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1213 00:13:42.529433       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 00:13:42.566646       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1213 00:13:42.566696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1213 00:13:42.688998       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1213 00:13:42.689049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1213 00:13:42.740856       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1213 00:13:42.740907       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1213 00:13:42.767127       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1213 00:13:42.767183       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1213 00:13:42.787559       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 00:13:42.787682       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1213 00:13:42.790624       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1213 00:13:42.790725       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1213 00:13:44.967154       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-12-13 00:08:25 UTC, ends at Wed 2023-12-13 00:23:06 UTC. --
	Dec 13 00:20:22 embed-certs-335807 kubelet[4036]: E1213 00:20:22.114576    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:20:34 embed-certs-335807 kubelet[4036]: E1213 00:20:34.113835    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:20:45 embed-certs-335807 kubelet[4036]: E1213 00:20:45.117669    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:20:45 embed-certs-335807 kubelet[4036]: E1213 00:20:45.195723    4036 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 13 00:20:45 embed-certs-335807 kubelet[4036]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 00:20:45 embed-certs-335807 kubelet[4036]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 00:20:45 embed-certs-335807 kubelet[4036]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 00:20:58 embed-certs-335807 kubelet[4036]: E1213 00:20:58.113940    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:21:10 embed-certs-335807 kubelet[4036]: E1213 00:21:10.115053    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:21:25 embed-certs-335807 kubelet[4036]: E1213 00:21:25.115004    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:21:40 embed-certs-335807 kubelet[4036]: E1213 00:21:40.114352    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:21:45 embed-certs-335807 kubelet[4036]: E1213 00:21:45.199440    4036 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 13 00:21:45 embed-certs-335807 kubelet[4036]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 00:21:45 embed-certs-335807 kubelet[4036]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 00:21:45 embed-certs-335807 kubelet[4036]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 00:21:53 embed-certs-335807 kubelet[4036]: E1213 00:21:53.114583    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:22:08 embed-certs-335807 kubelet[4036]: E1213 00:22:08.115163    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:22:22 embed-certs-335807 kubelet[4036]: E1213 00:22:22.114532    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:22:35 embed-certs-335807 kubelet[4036]: E1213 00:22:35.114942    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:22:45 embed-certs-335807 kubelet[4036]: E1213 00:22:45.195108    4036 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 13 00:22:45 embed-certs-335807 kubelet[4036]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 00:22:45 embed-certs-335807 kubelet[4036]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 00:22:45 embed-certs-335807 kubelet[4036]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 00:22:46 embed-certs-335807 kubelet[4036]: E1213 00:22:46.114468    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:23:00 embed-certs-335807 kubelet[4036]: E1213 00:23:00.115207    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	
	* 
	* ==> storage-provisioner [e0d17c42c09c56f857947503ff059c73b2692795fd48cd37dede5099ef99bd8c] <==
	* I1213 00:14:01.817023       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 00:14:01.828214       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 00:14:01.828307       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 00:14:01.873673       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 00:14:01.874059       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-335807_a72f11cb-1fd3-4017-b701-43bd84a93d17!
	I1213 00:14:01.876897       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a843e854-866a-4e87-b1b9-076260b696c7", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-335807_a72f11cb-1fd3-4017-b701-43bd84a93d17 became leader
	I1213 00:14:01.975483       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-335807_a72f11cb-1fd3-4017-b701-43bd84a93d17!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-335807 -n embed-certs-335807
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-335807 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-z7qb4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-335807 describe pod metrics-server-57f55c9bc5-z7qb4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-335807 describe pod metrics-server-57f55c9bc5-z7qb4: exit status 1 (68.316894ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-z7qb4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-335807 describe pod metrics-server-57f55c9bc5-z7qb4: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1213 00:14:27.616695  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-743278 -n default-k8s-diff-port-743278
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-13 00:23:14.675245171 +0000 UTC m=+5318.231382715
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-743278 -n default-k8s-diff-port-743278
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-743278 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-743278 logs -n 25: (1.682217452s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 13 Dec 23 00:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-380248                              | cert-expiration-380248       | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 12 Dec 23 23:58 UTC |
	| start   | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 13 Dec 23 00:01 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p pause-042245                                        | pause-042245                 | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 12 Dec 23 23:58 UTC |
	| start   | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 13 Dec 23 00:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-884273                              | stopped-upgrade-884273       | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-884273                              | stopped-upgrade-884273       | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC | 13 Dec 23 00:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-343019 | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC | 13 Dec 23 00:00 UTC |
	|         | disable-driver-mounts-343019                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC | 13 Dec 23 00:01 UTC |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-508612        | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-335807            | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-143586             | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-743278  | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:02 UTC | 13 Dec 23 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:02 UTC |                     |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-508612             | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:03 UTC | 13 Dec 23 00:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-335807                 | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-143586                  | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-743278       | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/13 00:04:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 00:04:40.034430  177409 out.go:296] Setting OutFile to fd 1 ...
	I1213 00:04:40.034592  177409 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:04:40.034601  177409 out.go:309] Setting ErrFile to fd 2...
	I1213 00:04:40.034606  177409 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:04:40.034805  177409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1213 00:04:40.035357  177409 out.go:303] Setting JSON to false
	I1213 00:04:40.036280  177409 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10028,"bootTime":1702415852,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 00:04:40.036342  177409 start.go:138] virtualization: kvm guest
	I1213 00:04:40.038707  177409 out.go:177] * [default-k8s-diff-port-743278] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 00:04:40.040139  177409 out.go:177]   - MINIKUBE_LOCATION=17777
	I1213 00:04:40.040129  177409 notify.go:220] Checking for updates...
	I1213 00:04:40.041788  177409 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 00:04:40.043246  177409 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:04:40.044627  177409 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1213 00:04:40.046091  177409 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 00:04:40.047562  177409 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 00:04:40.049427  177409 config.go:182] Loaded profile config "default-k8s-diff-port-743278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:04:40.049930  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:04:40.049979  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:04:40.064447  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44247
	I1213 00:04:40.064825  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:04:40.065333  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:04:40.065352  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:04:40.065686  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:04:40.065850  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:04:40.066092  177409 driver.go:392] Setting default libvirt URI to qemu:///system
	I1213 00:04:40.066357  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:04:40.066389  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:04:40.080217  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38499
	I1213 00:04:40.080643  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:04:40.081072  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:04:40.081098  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:04:40.081436  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:04:40.081622  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:04:40.114108  177409 out.go:177] * Using the kvm2 driver based on existing profile
	I1213 00:04:40.115585  177409 start.go:298] selected driver: kvm2
	I1213 00:04:40.115603  177409 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-743278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-743278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:04:40.115714  177409 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 00:04:40.116379  177409 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:04:40.116485  177409 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17777-136241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1213 00:04:40.131964  177409 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1213 00:04:40.132324  177409 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 00:04:40.132392  177409 cni.go:84] Creating CNI manager for ""
	I1213 00:04:40.132405  177409 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:04:40.132416  177409 start_flags.go:323] config:
	{Name:default-k8s-diff-port-743278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-74327
8 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:04:40.132599  177409 iso.go:125] acquiring lock: {Name:mkaf0c5717f5bf6253bd7ebf86eb20a82d195bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:04:40.135330  177409 out.go:177] * Starting control plane node default-k8s-diff-port-743278 in cluster default-k8s-diff-port-743278
	I1213 00:04:36.772718  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:39.844694  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:40.136912  177409 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1213 00:04:40.136959  177409 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1213 00:04:40.136972  177409 cache.go:56] Caching tarball of preloaded images
	I1213 00:04:40.137094  177409 preload.go:174] Found /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 00:04:40.137108  177409 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1213 00:04:40.137215  177409 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/config.json ...
	I1213 00:04:40.137413  177409 start.go:365] acquiring machines lock for default-k8s-diff-port-743278: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 00:04:45.924700  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:48.996768  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:55.076732  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:58.148779  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:04.228721  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:07.300700  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:13.380743  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:16.452690  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:22.532695  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:25.604771  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:31.684681  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:34.756720  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:40.836697  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:43.908711  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:49.988729  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:53.060691  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:59.140737  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:02.212709  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:08.292717  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:11.364746  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:17.444722  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:20.516796  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:26.596650  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:29.668701  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:35.748723  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:38.820688  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:44.900719  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:47.972683  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:54.052708  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:57.124684  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:03.204728  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:06.276720  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:12.356681  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:15.428743  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:21.508696  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:24.580749  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:30.660747  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:33.732746  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:39.812738  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:42.884767  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:48.964744  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:52.036691  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:58.116726  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:08:01.188638  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:08:07.268756  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:08:10.340725  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:08:13.345031  177122 start.go:369] acquired machines lock for "embed-certs-335807" in 4m2.39512191s
	I1213 00:08:13.345120  177122 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:08:13.345129  177122 fix.go:54] fixHost starting: 
	I1213 00:08:13.345524  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:08:13.345564  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:08:13.360333  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44799
	I1213 00:08:13.360832  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:08:13.361366  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:08:13.361390  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:08:13.361769  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:08:13.361941  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:13.362104  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:08:13.363919  177122 fix.go:102] recreateIfNeeded on embed-certs-335807: state=Stopped err=<nil>
	I1213 00:08:13.363938  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	W1213 00:08:13.364125  177122 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:08:13.366077  177122 out.go:177] * Restarting existing kvm2 VM for "embed-certs-335807" ...
	I1213 00:08:13.342763  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:08:13.342804  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:08:13.344878  176813 machine.go:91] provisioned docker machine in 4m37.409041046s
	I1213 00:08:13.344942  176813 fix.go:56] fixHost completed within 4m37.430106775s
	I1213 00:08:13.344949  176813 start.go:83] releasing machines lock for "old-k8s-version-508612", held for 4m37.430132032s
	W1213 00:08:13.344965  176813 start.go:694] error starting host: provision: host is not running
	W1213 00:08:13.345107  176813 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1213 00:08:13.345120  176813 start.go:709] Will try again in 5 seconds ...
	I1213 00:08:13.367310  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Start
	I1213 00:08:13.367451  177122 main.go:141] libmachine: (embed-certs-335807) Ensuring networks are active...
	I1213 00:08:13.368551  177122 main.go:141] libmachine: (embed-certs-335807) Ensuring network default is active
	I1213 00:08:13.368936  177122 main.go:141] libmachine: (embed-certs-335807) Ensuring network mk-embed-certs-335807 is active
	I1213 00:08:13.369290  177122 main.go:141] libmachine: (embed-certs-335807) Getting domain xml...
	I1213 00:08:13.369993  177122 main.go:141] libmachine: (embed-certs-335807) Creating domain...
	I1213 00:08:14.617766  177122 main.go:141] libmachine: (embed-certs-335807) Waiting to get IP...
	I1213 00:08:14.618837  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:14.619186  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:14.619322  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:14.619202  177987 retry.go:31] will retry after 226.757968ms: waiting for machine to come up
	I1213 00:08:14.847619  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:14.847962  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:14.847996  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:14.847892  177987 retry.go:31] will retry after 390.063287ms: waiting for machine to come up
	I1213 00:08:15.239515  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:15.239906  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:15.239939  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:15.239845  177987 retry.go:31] will retry after 341.644988ms: waiting for machine to come up
	I1213 00:08:15.583408  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:15.583848  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:15.583878  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:15.583796  177987 retry.go:31] will retry after 420.722896ms: waiting for machine to come up
	I1213 00:08:18.346616  176813 start.go:365] acquiring machines lock for old-k8s-version-508612: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 00:08:16.006364  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:16.006767  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:16.006803  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:16.006713  177987 retry.go:31] will retry after 548.041925ms: waiting for machine to come up
	I1213 00:08:16.556444  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:16.556880  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:16.556912  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:16.556833  177987 retry.go:31] will retry after 862.959808ms: waiting for machine to come up
	I1213 00:08:17.421147  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:17.421596  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:17.421630  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:17.421544  177987 retry.go:31] will retry after 1.085782098s: waiting for machine to come up
	I1213 00:08:18.509145  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:18.509595  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:18.509619  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:18.509556  177987 retry.go:31] will retry after 1.303432656s: waiting for machine to come up
	I1213 00:08:19.814985  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:19.815430  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:19.815473  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:19.815367  177987 retry.go:31] will retry after 1.337474429s: waiting for machine to come up
	I1213 00:08:21.154792  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:21.155213  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:21.155236  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:21.155165  177987 retry.go:31] will retry after 2.104406206s: waiting for machine to come up
	I1213 00:08:23.262615  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:23.263144  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:23.263174  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:23.263066  177987 retry.go:31] will retry after 2.064696044s: waiting for machine to come up
	I1213 00:08:25.330105  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:25.330586  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:25.330621  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:25.330544  177987 retry.go:31] will retry after 2.270537288s: waiting for machine to come up
	I1213 00:08:27.602267  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:27.602787  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:27.602810  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:27.602758  177987 retry.go:31] will retry after 3.020844169s: waiting for machine to come up
	I1213 00:08:30.626232  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:30.626696  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:30.626731  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:30.626645  177987 retry.go:31] will retry after 5.329279261s: waiting for machine to come up
	I1213 00:08:37.405257  177307 start.go:369] acquired machines lock for "no-preload-143586" in 4m8.02482326s
	I1213 00:08:37.405329  177307 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:08:37.405340  177307 fix.go:54] fixHost starting: 
	I1213 00:08:37.405777  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:08:37.405830  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:08:37.422055  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41433
	I1213 00:08:37.422558  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:08:37.423112  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:08:37.423143  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:08:37.423462  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:08:37.423650  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:08:37.423795  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:08:37.425302  177307 fix.go:102] recreateIfNeeded on no-preload-143586: state=Stopped err=<nil>
	I1213 00:08:37.425345  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	W1213 00:08:37.425519  177307 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:08:37.428723  177307 out.go:177] * Restarting existing kvm2 VM for "no-preload-143586" ...
	I1213 00:08:35.958579  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:35.959166  177122 main.go:141] libmachine: (embed-certs-335807) Found IP for machine: 192.168.61.249
	I1213 00:08:35.959200  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has current primary IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:35.959212  177122 main.go:141] libmachine: (embed-certs-335807) Reserving static IP address...
	I1213 00:08:35.959676  177122 main.go:141] libmachine: (embed-certs-335807) Reserved static IP address: 192.168.61.249
	I1213 00:08:35.959731  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "embed-certs-335807", mac: "52:54:00:20:1b:c0", ip: "192.168.61.249"} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:35.959746  177122 main.go:141] libmachine: (embed-certs-335807) Waiting for SSH to be available...
	I1213 00:08:35.959779  177122 main.go:141] libmachine: (embed-certs-335807) DBG | skip adding static IP to network mk-embed-certs-335807 - found existing host DHCP lease matching {name: "embed-certs-335807", mac: "52:54:00:20:1b:c0", ip: "192.168.61.249"}
	I1213 00:08:35.959795  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Getting to WaitForSSH function...
	I1213 00:08:35.962033  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:35.962419  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:35.962448  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:35.962552  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Using SSH client type: external
	I1213 00:08:35.962575  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa (-rw-------)
	I1213 00:08:35.962608  177122 main.go:141] libmachine: (embed-certs-335807) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:08:35.962624  177122 main.go:141] libmachine: (embed-certs-335807) DBG | About to run SSH command:
	I1213 00:08:35.962637  177122 main.go:141] libmachine: (embed-certs-335807) DBG | exit 0
	I1213 00:08:36.056268  177122 main.go:141] libmachine: (embed-certs-335807) DBG | SSH cmd err, output: <nil>: 
	I1213 00:08:36.056649  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetConfigRaw
	I1213 00:08:36.057283  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetIP
	I1213 00:08:36.060244  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.060656  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.060705  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.060930  177122 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/config.json ...
	I1213 00:08:36.061132  177122 machine.go:88] provisioning docker machine ...
	I1213 00:08:36.061150  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:36.061386  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetMachineName
	I1213 00:08:36.061569  177122 buildroot.go:166] provisioning hostname "embed-certs-335807"
	I1213 00:08:36.061593  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetMachineName
	I1213 00:08:36.061737  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.063997  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.064352  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.064374  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.064532  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:36.064743  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.064899  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.065039  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:36.065186  177122 main.go:141] libmachine: Using SSH client type: native
	I1213 00:08:36.065556  177122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.249 22 <nil> <nil>}
	I1213 00:08:36.065575  177122 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-335807 && echo "embed-certs-335807" | sudo tee /etc/hostname
	I1213 00:08:36.199697  177122 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-335807
	
	I1213 00:08:36.199733  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.202879  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.203289  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.203312  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.203495  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:36.203705  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.203845  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.203968  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:36.204141  177122 main.go:141] libmachine: Using SSH client type: native
	I1213 00:08:36.204545  177122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.249 22 <nil> <nil>}
	I1213 00:08:36.204564  177122 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-335807' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-335807/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-335807' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:08:36.336285  177122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:08:36.336315  177122 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:08:36.336337  177122 buildroot.go:174] setting up certificates
	I1213 00:08:36.336350  177122 provision.go:83] configureAuth start
	I1213 00:08:36.336364  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetMachineName
	I1213 00:08:36.336658  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetIP
	I1213 00:08:36.339327  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.339695  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.339727  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.339861  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.342106  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.342485  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.342506  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.342613  177122 provision.go:138] copyHostCerts
	I1213 00:08:36.342699  177122 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:08:36.342711  177122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:08:36.342795  177122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:08:36.342910  177122 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:08:36.342928  177122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:08:36.342962  177122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:08:36.343051  177122 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:08:36.343061  177122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:08:36.343099  177122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:08:36.343185  177122 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.embed-certs-335807 san=[192.168.61.249 192.168.61.249 localhost 127.0.0.1 minikube embed-certs-335807]
	I1213 00:08:36.680595  177122 provision.go:172] copyRemoteCerts
	I1213 00:08:36.680687  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:08:36.680715  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.683411  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.683664  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.683690  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.683826  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:36.684044  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.684217  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:36.684370  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:08:36.773978  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:08:36.795530  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1213 00:08:36.817104  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 00:08:36.838510  177122 provision.go:86] duration metric: configureAuth took 502.141764ms
	I1213 00:08:36.838544  177122 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:08:36.838741  177122 config.go:182] Loaded profile config "embed-certs-335807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:08:36.838818  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.841372  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.841720  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.841759  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.841875  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:36.842095  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.842276  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.842447  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:36.842593  177122 main.go:141] libmachine: Using SSH client type: native
	I1213 00:08:36.843043  177122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.249 22 <nil> <nil>}
	I1213 00:08:36.843069  177122 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:08:37.150317  177122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:08:37.150364  177122 machine.go:91] provisioned docker machine in 1.089215763s
	I1213 00:08:37.150378  177122 start.go:300] post-start starting for "embed-certs-335807" (driver="kvm2")
	I1213 00:08:37.150391  177122 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:08:37.150424  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.150800  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:08:37.150829  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:37.153552  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.153920  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.153958  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.154075  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:37.154268  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.154406  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:37.154562  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:08:37.245839  177122 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:08:37.249929  177122 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:08:37.249959  177122 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:08:37.250029  177122 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:08:37.250114  177122 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:08:37.250202  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:08:37.258062  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:08:37.280034  177122 start.go:303] post-start completed in 129.642247ms
	I1213 00:08:37.280060  177122 fix.go:56] fixHost completed within 23.934930358s
	I1213 00:08:37.280085  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:37.282572  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.282861  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.282903  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.283059  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:37.283333  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.283516  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.283694  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:37.283898  177122 main.go:141] libmachine: Using SSH client type: native
	I1213 00:08:37.284217  177122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.249 22 <nil> <nil>}
	I1213 00:08:37.284229  177122 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1213 00:08:37.405050  177122 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702426117.378231894
	
	I1213 00:08:37.405077  177122 fix.go:206] guest clock: 1702426117.378231894
	I1213 00:08:37.405099  177122 fix.go:219] Guest: 2023-12-13 00:08:37.378231894 +0000 UTC Remote: 2023-12-13 00:08:37.280064166 +0000 UTC m=+266.483341520 (delta=98.167728ms)
	I1213 00:08:37.405127  177122 fix.go:190] guest clock delta is within tolerance: 98.167728ms
	I1213 00:08:37.405137  177122 start.go:83] releasing machines lock for "embed-certs-335807", held for 24.060057368s
	I1213 00:08:37.405161  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.405417  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetIP
	I1213 00:08:37.408128  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.408513  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.408559  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.408681  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.409264  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.409449  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.409542  177122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:08:37.409611  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:37.409647  177122 ssh_runner.go:195] Run: cat /version.json
	I1213 00:08:37.409673  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:37.412390  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.412733  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.412764  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.412795  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.412910  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:37.413104  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.413187  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.413224  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.413263  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:37.413462  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:37.413455  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:08:37.413633  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.413758  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:37.413899  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:08:37.531948  177122 ssh_runner.go:195] Run: systemctl --version
	I1213 00:08:37.537555  177122 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:08:37.677429  177122 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:08:37.684043  177122 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:08:37.684115  177122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:08:37.702304  177122 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:08:37.702327  177122 start.go:475] detecting cgroup driver to use...
	I1213 00:08:37.702388  177122 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:08:37.716601  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:08:37.728516  177122 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:08:37.728571  177122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:08:37.740595  177122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:08:37.753166  177122 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:08:37.853095  177122 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:08:37.970696  177122 docker.go:219] disabling docker service ...
	I1213 00:08:37.970769  177122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:08:37.983625  177122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:08:37.994924  177122 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:08:38.110057  177122 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:08:38.229587  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:08:38.243052  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:08:38.260480  177122 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1213 00:08:38.260547  177122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:08:38.269442  177122 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:08:38.269508  177122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:08:38.278569  177122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:08:38.287680  177122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:08:38.296798  177122 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:08:38.306247  177122 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:08:38.314189  177122 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:08:38.314251  177122 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:08:38.326702  177122 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:08:38.335111  177122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:08:38.435024  177122 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 00:08:38.600232  177122 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:08:38.600322  177122 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:08:38.606384  177122 start.go:543] Will wait 60s for crictl version
	I1213 00:08:38.606446  177122 ssh_runner.go:195] Run: which crictl
	I1213 00:08:38.611180  177122 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:08:38.654091  177122 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1213 00:08:38.654197  177122 ssh_runner.go:195] Run: crio --version
	I1213 00:08:38.705615  177122 ssh_runner.go:195] Run: crio --version
	I1213 00:08:38.755387  177122 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1213 00:08:37.430037  177307 main.go:141] libmachine: (no-preload-143586) Calling .Start
	I1213 00:08:37.430266  177307 main.go:141] libmachine: (no-preload-143586) Ensuring networks are active...
	I1213 00:08:37.430931  177307 main.go:141] libmachine: (no-preload-143586) Ensuring network default is active
	I1213 00:08:37.431290  177307 main.go:141] libmachine: (no-preload-143586) Ensuring network mk-no-preload-143586 is active
	I1213 00:08:37.431640  177307 main.go:141] libmachine: (no-preload-143586) Getting domain xml...
	I1213 00:08:37.432281  177307 main.go:141] libmachine: (no-preload-143586) Creating domain...
	I1213 00:08:38.686491  177307 main.go:141] libmachine: (no-preload-143586) Waiting to get IP...
	I1213 00:08:38.687472  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:38.688010  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:38.688095  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:38.687986  178111 retry.go:31] will retry after 246.453996ms: waiting for machine to come up
	I1213 00:08:38.936453  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:38.936931  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:38.936963  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:38.936879  178111 retry.go:31] will retry after 317.431088ms: waiting for machine to come up
	I1213 00:08:39.256641  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:39.257217  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:39.257241  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:39.257165  178111 retry.go:31] will retry after 379.635912ms: waiting for machine to come up
	I1213 00:08:38.757019  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetIP
	I1213 00:08:38.760125  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:38.760684  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:38.760720  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:38.760949  177122 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1213 00:08:38.765450  177122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:08:38.778459  177122 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1213 00:08:38.778539  177122 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:08:38.819215  177122 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1213 00:08:38.819281  177122 ssh_runner.go:195] Run: which lz4
	I1213 00:08:38.823481  177122 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1213 00:08:38.829034  177122 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 00:08:38.829069  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1213 00:08:40.721922  177122 crio.go:444] Took 1.898469 seconds to copy over tarball
	I1213 00:08:40.721984  177122 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 00:08:39.638611  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:39.639108  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:39.639137  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:39.639067  178111 retry.go:31] will retry after 596.16391ms: waiting for machine to come up
	I1213 00:08:40.237504  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:40.237957  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:40.237990  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:40.237911  178111 retry.go:31] will retry after 761.995315ms: waiting for machine to come up
	I1213 00:08:41.002003  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:41.002388  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:41.002413  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:41.002329  178111 retry.go:31] will retry after 693.578882ms: waiting for machine to come up
	I1213 00:08:41.697126  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:41.697617  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:41.697652  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:41.697555  178111 retry.go:31] will retry after 1.050437275s: waiting for machine to come up
	I1213 00:08:42.749227  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:42.749833  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:42.749866  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:42.749782  178111 retry.go:31] will retry after 1.175916736s: waiting for machine to come up
	I1213 00:08:43.927564  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:43.928115  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:43.928144  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:43.928065  178111 retry.go:31] will retry after 1.590924957s: waiting for machine to come up
	I1213 00:08:43.767138  177122 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.045121634s)
	I1213 00:08:43.767169  177122 crio.go:451] Took 3.045224 seconds to extract the tarball
	I1213 00:08:43.767178  177122 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 00:08:43.809047  177122 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:08:43.873704  177122 crio.go:496] all images are preloaded for cri-o runtime.
	I1213 00:08:43.873726  177122 cache_images.go:84] Images are preloaded, skipping loading
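The two crictl runs above bracket the preload step: the first (crio.go:492) finds no kube-apiserver image and triggers the tarball copy and extraction, the second (crio.go:496) confirms every image is now present. A rough sketch of how such a check can parse the output of crictl images --output json; the struct models only the fields the check needs and the function name is invented, so this is illustrative rather than minikube's real code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models the rough shape of `crictl images --output json`,
// keeping only the repo tags needed for the presence check.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the CRI runtime already has the given tag loaded,
// mirroring the "couldn't find preloaded image ... assuming images are not
// preloaded" decision in the log above.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
	fmt.Println(ok, err)
}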
	I1213 00:08:43.873792  177122 ssh_runner.go:195] Run: crio config
	I1213 00:08:43.941716  177122 cni.go:84] Creating CNI manager for ""
	I1213 00:08:43.941747  177122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:08:43.941774  177122 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1213 00:08:43.941800  177122 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.249 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-335807 NodeName:embed-certs-335807 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 00:08:43.942026  177122 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-335807"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
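The YAML above is the fully rendered kubeadm configuration; minikube builds it by filling templates with the options logged at kubeadm.go:176. A much-reduced sketch of that rendering step using text/template; the struct, the template body, and the field names here are simplified, hypothetical stand-ins rather than minikube's real ones.

package main

import (
	"os"
	"text/template"
)

// kubeadmParams is a trimmed-down stand-in for the options printed in the
// kubeadm.go:176 line above.
type kubeadmParams struct {
	AdvertiseAddress string
	NodeName         string
	PodSubnet        string
	K8sVersion       string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	// Render the template with the values seen in the log and print the YAML.
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	_ = t.Execute(os.Stdout, kubeadmParams{
		AdvertiseAddress: "192.168.61.249",
		NodeName:         "embed-certs-335807",
		PodSubnet:        "10.244.0.0/16",
		K8sVersion:       "v1.28.4",
	})
}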
	
	I1213 00:08:43.942123  177122 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-335807 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-335807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1213 00:08:43.942201  177122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1213 00:08:43.951461  177122 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:08:43.951550  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:08:43.960491  177122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1213 00:08:43.976763  177122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 00:08:43.993725  177122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1213 00:08:44.010795  177122 ssh_runner.go:195] Run: grep 192.168.61.249	control-plane.minikube.internal$ /etc/hosts
	I1213 00:08:44.014668  177122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.249	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:08:44.027339  177122 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807 for IP: 192.168.61.249
	I1213 00:08:44.027376  177122 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:08:44.027550  177122 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:08:44.027617  177122 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:08:44.027701  177122 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/client.key
	I1213 00:08:44.027786  177122 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/apiserver.key.ba34ddd8
	I1213 00:08:44.027844  177122 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/proxy-client.key
	I1213 00:08:44.027987  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:08:44.028035  177122 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:08:44.028056  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:08:44.028088  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:08:44.028129  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:08:44.028158  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:08:44.028220  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:08:44.029033  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:08:44.054023  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 00:08:44.078293  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:08:44.102083  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 00:08:44.126293  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:08:44.149409  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:08:44.172887  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:08:44.195662  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:08:44.218979  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:08:44.241598  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:08:44.265251  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:08:44.290073  177122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:08:44.306685  177122 ssh_runner.go:195] Run: openssl version
	I1213 00:08:44.312422  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:08:44.322405  177122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:08:44.327215  177122 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:08:44.327296  177122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:08:44.333427  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1213 00:08:44.343574  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:08:44.353981  177122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:08:44.358997  177122 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:08:44.359051  177122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:08:44.364654  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 00:08:44.375147  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:08:44.384900  177122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:08:44.389492  177122 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:08:44.389553  177122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:08:44.395105  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
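Each CA certificate above is installed twice: once under its own name in /usr/share/ca-certificates, and once as a symlink named after its OpenSSL subject hash (the 51391683.0-style names) under /etc/ssl/certs, which is how OpenSSL-based clients locate trusted CAs. A minimal Go sketch of that hash-and-symlink step; it shells out to openssl just as the log does, needs root to write under /etc/ssl/certs, and the function name is invented for the sketch.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert computes the OpenSSL subject hash of a certificate and
// symlinks it as <hash>.0 in certsDir, mirroring the openssl x509 -hash and
// ln -fs steps logged above.
func installCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace an existing link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}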
	I1213 00:08:44.404656  177122 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:08:44.409852  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 00:08:44.415755  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 00:08:44.421911  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 00:08:44.428119  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 00:08:44.435646  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 00:08:44.441692  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 00:08:44.447849  177122 kubeadm.go:404] StartCluster: {Name:embed-certs-335807 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-335807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.249 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:08:44.447976  177122 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:08:44.448025  177122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:08:44.495646  177122 cri.go:89] found id: ""
	I1213 00:08:44.495744  177122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:08:44.506405  177122 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1213 00:08:44.506454  177122 kubeadm.go:636] restartCluster start
	I1213 00:08:44.506515  177122 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 00:08:44.516110  177122 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:44.517275  177122 kubeconfig.go:92] found "embed-certs-335807" server: "https://192.168.61.249:8443"
	I1213 00:08:44.519840  177122 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 00:08:44.529214  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:44.529294  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:44.540415  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:44.540447  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:44.540497  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:44.552090  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:45.052810  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:45.052890  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:45.066300  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:45.552897  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:45.553031  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:45.564969  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:45.520191  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:45.520729  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:45.520754  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:45.520662  178111 retry.go:31] will retry after 1.407916355s: waiting for machine to come up
	I1213 00:08:46.930655  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:46.931073  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:46.931138  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:46.930993  178111 retry.go:31] will retry after 2.033169427s: waiting for machine to come up
	I1213 00:08:48.966888  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:48.967318  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:48.967351  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:48.967253  178111 retry.go:31] will retry after 2.277791781s: waiting for machine to come up
	I1213 00:08:46.052915  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:46.053025  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:46.068633  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:46.552208  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:46.552317  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:46.565045  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:47.052533  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:47.052627  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:47.068457  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:47.553040  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:47.553127  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:47.564657  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:48.052228  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:48.052322  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:48.068950  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:48.553171  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:48.553256  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:48.568868  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:49.052389  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:49.052515  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:49.064674  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:49.552894  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:49.553012  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:49.564302  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:50.052843  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:50.052941  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:50.064617  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:50.553231  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:50.553316  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:50.567944  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:51.247665  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:51.248141  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:51.248175  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:51.248098  178111 retry.go:31] will retry after 4.234068925s: waiting for machine to come up
	I1213 00:08:51.052574  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:51.052700  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:51.069491  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:51.553152  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:51.553234  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:51.565331  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:52.052984  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:52.053064  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:52.064748  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:52.552257  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:52.552362  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:52.563626  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:53.053196  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:53.053287  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:53.064273  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:53.552319  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:53.552423  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:53.563587  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:54.053227  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:54.053331  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:54.065636  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:54.530249  177122 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1213 00:08:54.530301  177122 kubeadm.go:1135] stopping kube-system containers ...
	I1213 00:08:54.530330  177122 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 00:08:54.530424  177122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:08:54.570200  177122 cri.go:89] found id: ""
	I1213 00:08:54.570275  177122 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 00:08:54.586722  177122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:08:54.596240  177122 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:08:54.596313  177122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:08:54.605202  177122 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1213 00:08:54.605226  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:54.718619  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:55.483563  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:55.483985  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:55.484024  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:55.483927  178111 retry.go:31] will retry after 5.446962632s: waiting for machine to come up
	I1213 00:08:55.944250  177122 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.225592219s)
	I1213 00:08:55.944282  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:56.132294  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:56.214859  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:56.297313  177122 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:08:56.297421  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:56.315946  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:56.830228  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:57.329695  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:57.830336  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:58.329610  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:58.829933  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:58.853978  177122 api_server.go:72] duration metric: took 2.556667404s to wait for apiserver process to appear ...
	I1213 00:08:58.854013  177122 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:08:58.854054  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
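After rewriting the configuration and rerunning the kubeadm phases, minikube first waits for a kube-apiserver process (the pgrep loop above) and then polls the /healthz endpoint until it answers. A minimal sketch of that healthz wait in Go; skipping TLS verification and the fixed 500ms poll interval are simplifications for the sketch, not minikube's real client settings (which trust the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns "ok"
// or the deadline passes, mirroring the health check step logged above.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	start := time.Now()
	for time.Since(start) < deadline {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %v", url, deadline)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.61.249:8443/healthz", 4*time.Minute))
}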
	I1213 00:09:02.161624  177409 start.go:369] acquired machines lock for "default-k8s-diff-port-743278" in 4m22.024178516s
	I1213 00:09:02.161693  177409 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:09:02.161704  177409 fix.go:54] fixHost starting: 
	I1213 00:09:02.162127  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:02.162174  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:02.179045  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41749
	I1213 00:09:02.179554  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:02.180099  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:02.180131  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:02.180461  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:02.180658  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:02.180795  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:02.182459  177409 fix.go:102] recreateIfNeeded on default-k8s-diff-port-743278: state=Stopped err=<nil>
	I1213 00:09:02.182498  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	W1213 00:09:02.182657  177409 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:09:02.184934  177409 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-743278" ...
	I1213 00:09:00.933522  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:00.934020  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has current primary IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:00.934046  177307 main.go:141] libmachine: (no-preload-143586) Found IP for machine: 192.168.50.181
	I1213 00:09:00.934058  177307 main.go:141] libmachine: (no-preload-143586) Reserving static IP address...
	I1213 00:09:00.934538  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "no-preload-143586", mac: "52:54:00:4d:da:7b", ip: "192.168.50.181"} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:00.934573  177307 main.go:141] libmachine: (no-preload-143586) DBG | skip adding static IP to network mk-no-preload-143586 - found existing host DHCP lease matching {name: "no-preload-143586", mac: "52:54:00:4d:da:7b", ip: "192.168.50.181"}
	I1213 00:09:00.934592  177307 main.go:141] libmachine: (no-preload-143586) Reserved static IP address: 192.168.50.181
	I1213 00:09:00.934601  177307 main.go:141] libmachine: (no-preload-143586) Waiting for SSH to be available...
	I1213 00:09:00.934610  177307 main.go:141] libmachine: (no-preload-143586) DBG | Getting to WaitForSSH function...
	I1213 00:09:00.936830  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:00.937236  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:00.937283  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:00.937399  177307 main.go:141] libmachine: (no-preload-143586) DBG | Using SSH client type: external
	I1213 00:09:00.937421  177307 main.go:141] libmachine: (no-preload-143586) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa (-rw-------)
	I1213 00:09:00.937458  177307 main.go:141] libmachine: (no-preload-143586) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.181 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:09:00.937473  177307 main.go:141] libmachine: (no-preload-143586) DBG | About to run SSH command:
	I1213 00:09:00.937485  177307 main.go:141] libmachine: (no-preload-143586) DBG | exit 0
	I1213 00:09:01.024658  177307 main.go:141] libmachine: (no-preload-143586) DBG | SSH cmd err, output: <nil>: 
	I1213 00:09:01.024996  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetConfigRaw
	I1213 00:09:01.025611  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetIP
	I1213 00:09:01.028062  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.028471  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.028509  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.028734  177307 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/config.json ...
	I1213 00:09:01.028955  177307 machine.go:88] provisioning docker machine ...
	I1213 00:09:01.028980  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:01.029193  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetMachineName
	I1213 00:09:01.029394  177307 buildroot.go:166] provisioning hostname "no-preload-143586"
	I1213 00:09:01.029409  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetMachineName
	I1213 00:09:01.029580  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.031949  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.032273  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.032305  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.032413  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.032599  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.032758  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.032877  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.033036  177307 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:01.033377  177307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1213 00:09:01.033395  177307 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-143586 && echo "no-preload-143586" | sudo tee /etc/hostname
	I1213 00:09:01.157420  177307 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-143586
	
	I1213 00:09:01.157461  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.160181  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.160498  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.160535  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.160654  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.160915  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.161104  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.161299  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.161469  177307 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:01.161785  177307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1213 00:09:01.161811  177307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-143586' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-143586/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-143586' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:09:01.287746  177307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:09:01.287776  177307 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:09:01.287835  177307 buildroot.go:174] setting up certificates
	I1213 00:09:01.287844  177307 provision.go:83] configureAuth start
	I1213 00:09:01.287857  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetMachineName
	I1213 00:09:01.288156  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetIP
	I1213 00:09:01.290754  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.291147  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.291179  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.291296  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.293643  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.294002  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.294034  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.294184  177307 provision.go:138] copyHostCerts
	I1213 00:09:01.294243  177307 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:09:01.294256  177307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:09:01.294323  177307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:09:01.294441  177307 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:09:01.294453  177307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:09:01.294489  177307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:09:01.294569  177307 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:09:01.294578  177307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:09:01.294610  177307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:09:01.294683  177307 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.no-preload-143586 san=[192.168.50.181 192.168.50.181 localhost 127.0.0.1 minikube no-preload-143586]
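The provision.go:112 line above regenerates the docker-machine server certificate with the listed IP and DNS SANs, signed by the CA kept under .minikube/certs. A shortened Go sketch of issuing such a SAN certificate with crypto/x509; to stay brief it self-signs with a throwaway key instead of using the real CA, so it is illustrative only.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a server certificate for the SANs listed in the
// provision.go:112 line above, self-signed here to keep the sketch short.
func newServerCert(ips []net.IP, dnsNames []string) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-143586"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     dnsNames,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	pemBytes, err := newServerCert(
		[]net.IP{net.ParseIP("192.168.50.181"), net.ParseIP("127.0.0.1")},
		[]string{"localhost", "minikube", "no-preload-143586"},
	)
	fmt.Println(len(pemBytes), err)
}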
	I1213 00:09:01.407742  177307 provision.go:172] copyRemoteCerts
	I1213 00:09:01.407823  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:09:01.407856  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.410836  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.411141  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.411170  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.411455  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.411698  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.411883  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.412056  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:09:01.501782  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 00:09:01.530009  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:09:01.555147  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1213 00:09:01.580479  177307 provision.go:86] duration metric: configureAuth took 292.598329ms
	I1213 00:09:01.580511  177307 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:09:01.580732  177307 config.go:182] Loaded profile config "no-preload-143586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1213 00:09:01.580835  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.583742  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.584241  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.584274  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.584581  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.584798  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.585004  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.585184  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.585429  177307 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:01.585889  177307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1213 00:09:01.585928  177307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:09:01.909801  177307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:09:01.909855  177307 machine.go:91] provisioned docker machine in 880.876025ms
	I1213 00:09:01.909868  177307 start.go:300] post-start starting for "no-preload-143586" (driver="kvm2")
	I1213 00:09:01.909883  177307 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:09:01.909905  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:01.910311  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:09:01.910349  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.913247  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.913635  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.913669  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.913824  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.914044  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.914199  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.914349  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:09:02.005986  177307 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:09:02.011294  177307 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:09:02.011323  177307 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:09:02.011403  177307 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:09:02.011494  177307 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:09:02.011601  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:09:02.022942  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:02.044535  177307 start.go:303] post-start completed in 134.650228ms
	I1213 00:09:02.044569  177307 fix.go:56] fixHost completed within 24.639227496s
	I1213 00:09:02.044597  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:02.047115  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.047543  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:02.047573  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.047758  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:02.047986  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:02.048161  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:02.048340  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:02.048500  177307 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:02.048803  177307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1213 00:09:02.048816  177307 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 00:09:02.161458  177307 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702426142.108795362
	
	I1213 00:09:02.161485  177307 fix.go:206] guest clock: 1702426142.108795362
	I1213 00:09:02.161496  177307 fix.go:219] Guest: 2023-12-13 00:09:02.108795362 +0000 UTC Remote: 2023-12-13 00:09:02.044573107 +0000 UTC m=+272.815740988 (delta=64.222255ms)
	I1213 00:09:02.161522  177307 fix.go:190] guest clock delta is within tolerance: 64.222255ms
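The guest-clock check above reads the guest's wall clock over SSH with "date +%s.%N", compares it with the host-side timestamp, and only forces a resync when the delta exceeds a tolerance. Below is a minimal standalone sketch of that comparison using the timestamps from the log; the 2s tolerance and the function names are illustrative assumptions, not minikube's implementation.

    package main

    import (
        "fmt"
        "math"
        "time"
    )

    // clockDeltaWithinTolerance reports whether the guest clock is close enough
    // to the host clock that no resync is needed. The tolerance value is an
    // assumption made for this sketch.
    func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        return delta, math.Abs(float64(delta)) <= float64(tolerance)
    }

    func main() {
        // Timestamps copied from the log lines above (guest vs. remote).
        guest := time.Date(2023, 12, 13, 0, 9, 2, 108795362, time.UTC)
        host := time.Date(2023, 12, 13, 0, 9, 2, 44573107, time.UTC)

        delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
        fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // delta=64.222255ms within tolerance=true
    }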
	I1213 00:09:02.161529  177307 start.go:83] releasing machines lock for "no-preload-143586", held for 24.756225075s
	I1213 00:09:02.161560  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:02.161847  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetIP
	I1213 00:09:02.164980  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.165383  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:02.165406  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.165582  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:02.166273  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:02.166493  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:02.166576  177307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:09:02.166621  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:02.166903  177307 ssh_runner.go:195] Run: cat /version.json
	I1213 00:09:02.166931  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:02.169526  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.169553  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.169895  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:02.169938  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:02.169978  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.170000  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.170183  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:02.170282  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:02.170344  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:02.170473  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:02.170480  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:02.170603  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:09:02.170653  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:02.170804  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:09:02.281372  177307 ssh_runner.go:195] Run: systemctl --version
	I1213 00:09:02.288798  177307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:09:02.432746  177307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:09:02.441453  177307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:09:02.441539  177307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:09:02.456484  177307 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:09:02.456512  177307 start.go:475] detecting cgroup driver to use...
	I1213 00:09:02.456578  177307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:09:02.473267  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:09:02.485137  177307 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:09:02.485226  177307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:09:02.497728  177307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:09:02.510592  177307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:09:02.657681  177307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:09:02.791382  177307 docker.go:219] disabling docker service ...
	I1213 00:09:02.791476  177307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:09:02.804977  177307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:09:02.817203  177307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:09:02.927181  177307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:09:03.037010  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:09:03.050235  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:09:03.068944  177307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1213 00:09:03.069048  177307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:03.078813  177307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:09:03.078975  177307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:03.089064  177307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:03.098790  177307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:03.109679  177307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:09:03.120686  177307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:09:03.128767  177307 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:09:03.128820  177307 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:09:03.141210  177307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:09:03.149602  177307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:09:03.254618  177307 ssh_runner.go:195] Run: sudo systemctl restart crio
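The configuration steps above (pause image, cgroupfs cgroup manager, conmon_cgroup, br_netfilter, IPv4 forwarding, then a CRI-O restart) reduce to a short sequence of shell commands run on the node. The sketch below replays the same sequence with os/exec; it assumes it is run directly on the node with sudo available and is not minikube's ssh_runner, just the commands copied from the log.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        steps := []string{
            `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo modprobe br_netfilter`,
            `sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
            `sudo systemctl daemon-reload && sudo systemctl restart crio`,
        }
        for _, s := range steps {
            // Each step is run through a shell so the sed quoting stays intact.
            out, err := exec.Command("sh", "-c", s).CombinedOutput()
            if err != nil {
                log.Fatalf("step %q failed: %v\n%s", s, err, out)
            }
            fmt.Printf("ok: %s\n", s)
        }
    }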
	I1213 00:09:03.434005  177307 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:09:03.434097  177307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:09:03.440391  177307 start.go:543] Will wait 60s for crictl version
	I1213 00:09:03.440481  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:03.445570  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:09:03.492155  177307 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1213 00:09:03.492240  177307 ssh_runner.go:195] Run: crio --version
	I1213 00:09:03.549854  177307 ssh_runner.go:195] Run: crio --version
	I1213 00:09:03.605472  177307 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1213 00:09:03.606678  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetIP
	I1213 00:09:03.610326  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:03.610753  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:03.610789  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:03.611019  177307 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1213 00:09:03.616608  177307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:03.632258  177307 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1213 00:09:03.632317  177307 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:03.672640  177307 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1213 00:09:03.672666  177307 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 00:09:03.672723  177307 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:03.672772  177307 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:03.672774  177307 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:03.672820  177307 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:03.673002  177307 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1213 00:09:03.673032  177307 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:03.673038  177307 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:03.673094  177307 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:03.674386  177307 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:03.674433  177307 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1213 00:09:03.674505  177307 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:03.674648  177307 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:03.674774  177307 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:03.674822  177307 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:03.674864  177307 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:03.675103  177307 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:03.808980  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:03.812271  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1213 00:09:03.827742  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:03.828695  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:03.831300  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:03.846041  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:03.850598  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:03.908323  177307 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I1213 00:09:03.908378  177307 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:03.908458  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.122878  177307 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I1213 00:09:04.122930  177307 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:04.122955  177307 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1213 00:09:04.123115  177307 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:04.123132  177307 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1213 00:09:04.123164  177307 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:04.122988  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.123203  177307 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I1213 00:09:04.123230  177307 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:04.123245  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:04.123267  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.123065  177307 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I1213 00:09:04.123304  177307 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:04.123311  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.123338  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.123201  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.135289  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:04.139046  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:04.206020  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:04.206025  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I1213 00:09:04.206195  177307 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1213 00:09:04.206291  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:04.206422  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:04.247875  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I1213 00:09:04.248003  177307 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1213 00:09:04.248126  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1213 00:09:04.248193  177307 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I1213 00:09:02.719708  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:02.719761  177122 api_server.go:103] status: https://192.168.61.249:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:02.719779  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:09:02.780571  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:02.780621  177122 api_server.go:103] status: https://192.168.61.249:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:03.281221  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:09:03.290375  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:03.290413  177122 api_server.go:103] status: https://192.168.61.249:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:03.781510  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:09:03.788285  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:03.788314  177122 api_server.go:103] status: https://192.168.61.249:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:04.280872  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:09:04.288043  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 200:
	ok
	I1213 00:09:04.299772  177122 api_server.go:141] control plane version: v1.28.4
	I1213 00:09:04.299808  177122 api_server.go:131] duration metric: took 5.445787793s to wait for apiserver health ...
	I1213 00:09:04.299821  177122 cni.go:84] Creating CNI manager for ""
	I1213 00:09:04.299830  177122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:04.301759  177122 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
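The 403 and 500 responses above are the expected progression while the restarted apiserver finishes its post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes); the wait amounts to polling /healthz until it returns 200 or a deadline passes. Below is a minimal sketch of that poll; the insecure TLS client and the fixed 500ms interval are simplifications for illustration, not the actual api_server.go logic.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForAPIServer polls the apiserver /healthz endpoint until it returns
    // 200 OK or the deadline passes, printing non-200 bodies as it goes.
    func waitForAPIServer(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s not healthy after %v", url, timeout)
    }

    func main() {
        if err := waitForAPIServer("https://192.168.61.249:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }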
	I1213 00:09:02.186420  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Start
	I1213 00:09:02.186584  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Ensuring networks are active...
	I1213 00:09:02.187464  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Ensuring network default is active
	I1213 00:09:02.187836  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Ensuring network mk-default-k8s-diff-port-743278 is active
	I1213 00:09:02.188238  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Getting domain xml...
	I1213 00:09:02.188979  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Creating domain...
	I1213 00:09:03.516121  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting to get IP...
	I1213 00:09:03.517461  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:03.518001  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:03.518058  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:03.517966  178294 retry.go:31] will retry after 198.440266ms: waiting for machine to come up
	I1213 00:09:03.718554  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:03.718808  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:03.718846  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:03.718804  178294 retry.go:31] will retry after 319.889216ms: waiting for machine to come up
	I1213 00:09:04.040334  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:04.040806  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:04.040956  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:04.040869  178294 retry.go:31] will retry after 465.804275ms: waiting for machine to come up
	I1213 00:09:04.508751  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:04.509133  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:04.509237  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:04.509181  178294 retry.go:31] will retry after 609.293222ms: waiting for machine to come up
	I1213 00:09:04.303497  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:09:04.332773  177122 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:09:04.373266  177122 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:09:04.384737  177122 system_pods.go:59] 9 kube-system pods found
	I1213 00:09:04.384791  177122 system_pods.go:61] "coredns-5dd5756b68-5vm25" [83fb4b19-82a2-42eb-b4df-6fd838fb8848] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 00:09:04.384805  177122 system_pods.go:61] "coredns-5dd5756b68-6mfmr" [e9598d8f-e497-4725-8eca-7fe0e7c2c2f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 00:09:04.384820  177122 system_pods.go:61] "etcd-embed-certs-335807" [cf066481-3312-4fce-8e29-e00a0177f188] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 00:09:04.384833  177122 system_pods.go:61] "kube-apiserver-embed-certs-335807" [0a545be1-8bb8-425a-889e-5ee1293e0bdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 00:09:04.384847  177122 system_pods.go:61] "kube-controller-manager-embed-certs-335807" [fd7ec770-5008-46f9-9f41-122e56baf2e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 00:09:04.384862  177122 system_pods.go:61] "kube-proxy-k8n7r" [df8cefdc-7c91-40e6-8949-ba413fd75b28] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 00:09:04.384874  177122 system_pods.go:61] "kube-scheduler-embed-certs-335807" [d2431157-640c-49e6-a83d-37cac6be1c50] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 00:09:04.384883  177122 system_pods.go:61] "metrics-server-57f55c9bc5-fx5pd" [8aa6fc5a-5649-47b2-a7de-3cabfd1515a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:09:04.384899  177122 system_pods.go:61] "storage-provisioner" [02026bc0-4e03-4747-ad77-052f2911efe1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 00:09:04.384909  177122 system_pods.go:74] duration metric: took 11.614377ms to wait for pod list to return data ...
	I1213 00:09:04.384928  177122 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:09:04.389533  177122 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:09:04.389578  177122 node_conditions.go:123] node cpu capacity is 2
	I1213 00:09:04.389594  177122 node_conditions.go:105] duration metric: took 4.657548ms to run NodePressure ...
	I1213 00:09:04.389622  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:04.771105  177122 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1213 00:09:04.778853  177122 kubeadm.go:787] kubelet initialised
	I1213 00:09:04.778886  177122 kubeadm.go:788] duration metric: took 7.74816ms waiting for restarted kubelet to initialise ...
	I1213 00:09:04.778898  177122 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:04.795344  177122 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace to be "Ready" ...
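The pod_ready waits that follow all perform the same check: fetch the pod and test whether its Ready condition is True, retrying until a per-pod timeout. Below is a sketch of that check with client-go; the kubeconfig path, pod name, and 2s poll interval are assumptions made for illustration, not values from the test.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True, which is
    // the same condition the pod_ready waits above poll for.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-5vm25", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }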
	I1213 00:09:04.323893  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1213 00:09:04.323901  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I1213 00:09:04.324122  177307 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1213 00:09:04.324168  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1213 00:09:04.324006  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I1213 00:09:04.324031  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I1213 00:09:04.324300  177307 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1213 00:09:04.324336  177307 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1213 00:09:04.324067  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1213 00:09:04.324096  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I1213 00:09:04.324100  177307 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I1213 00:09:04.597566  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:07.626684  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.302476413s)
	I1213 00:09:07.626718  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I1213 00:09:07.626754  177307 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1213 00:09:07.626784  177307 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0: (3.302394961s)
	I1213 00:09:07.626821  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1213 00:09:07.626824  177307 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (3.302508593s)
	I1213 00:09:07.626859  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I1213 00:09:07.626833  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1213 00:09:07.626882  177307 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.029282623s)
	I1213 00:09:07.626755  177307 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.302393062s)
	I1213 00:09:07.626939  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I1213 00:09:07.626975  177307 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1213 00:09:07.627010  177307 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:07.627072  177307 ssh_runner.go:195] Run: which crictl
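Each cached image above goes through the same flow: podman image inspect to see whether the runtime already holds it, crictl rmi to drop a stale tag, then podman load of the tarball kept under /var/lib/minikube/images. Below is a hedged sketch of the inspect-then-load part of that flow; it assumes it runs on the node with sudo, and the image name and tarball path are simply copied from the log.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureImage loads a cached image tarball into the CRI-O/podman store
    // unless the runtime already has the image.
    func ensureImage(image, tarball string) error {
        // Does the runtime already have the image?
        if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
            fmt.Printf("%s already present, skipping load\n", image)
            return nil
        }
        // Remove a stale tag if one exists, then load from the cached tarball.
        _ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
        out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
        }
        fmt.Printf("loaded %s from %s\n", image, tarball)
        return nil
    }

    func main() {
        if err := ensureImage("registry.k8s.io/etcd:3.5.10-0", "/var/lib/minikube/images/etcd_3.5.10-0"); err != nil {
            fmt.Println(err)
        }
    }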
	I1213 00:09:05.120691  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:05.121251  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:05.121285  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:05.121183  178294 retry.go:31] will retry after 488.195845ms: waiting for machine to come up
	I1213 00:09:05.610815  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:05.611226  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:05.611258  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:05.611167  178294 retry.go:31] will retry after 705.048097ms: waiting for machine to come up
	I1213 00:09:06.317891  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:06.318329  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:06.318353  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:06.318278  178294 retry.go:31] will retry after 788.420461ms: waiting for machine to come up
	I1213 00:09:07.108179  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:07.108736  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:07.108769  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:07.108696  178294 retry.go:31] will retry after 1.331926651s: waiting for machine to come up
	I1213 00:09:08.442609  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:08.443087  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:08.443114  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:08.443032  178294 retry.go:31] will retry after 1.180541408s: waiting for machine to come up
	I1213 00:09:09.625170  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:09.625722  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:09.625753  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:09.625653  178294 retry.go:31] will retry after 1.866699827s: waiting for machine to come up
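The retry.go lines above show a bounded wait for the libvirt domain to obtain a DHCP lease, retrying with randomized, growing delays. Below is a minimal sketch of that pattern; lookupIP is a placeholder, and the initial delay, cap, and jitter are assumptions rather than minikube's actual retry helper.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // errNoIP stands in for "unable to find current IP address of domain ...".
    var errNoIP = errors.New("machine has no IP yet")

    // lookupIP is a placeholder for querying the libvirt DHCP leases; it is an
    // assumption of this sketch, not a minikube function.
    func lookupIP() (string, error) { return "", errNoIP }

    // waitForIP retries lookupIP with a randomized, growing delay, mirroring
    // the "will retry after ..." lines in the log.
    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
            time.Sleep(jittered)
            if delay < 4*time.Second {
                delay *= 2
            }
        }
        return "", fmt.Errorf("machine did not get an IP within %v", timeout)
    }

    func main() {
        if _, err := waitForIP(3 * time.Second); err != nil {
            fmt.Println(err)
        }
    }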
	I1213 00:09:06.828008  177122 pod_ready.go:102] pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:09.322889  177122 pod_ready.go:102] pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:09.822883  177122 pod_ready.go:92] pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:09.822913  177122 pod_ready.go:81] duration metric: took 5.027534973s waiting for pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:09.822927  177122 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6mfmr" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:09.828990  177122 pod_ready.go:92] pod "coredns-5dd5756b68-6mfmr" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:09.829018  177122 pod_ready.go:81] duration metric: took 6.083345ms waiting for pod "coredns-5dd5756b68-6mfmr" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:09.829035  177122 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:09.803403  177307 ssh_runner.go:235] Completed: which crictl: (2.176302329s)
	I1213 00:09:09.803541  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:09.803468  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.176578633s)
	I1213 00:09:09.803602  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1213 00:09:09.803634  177307 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1213 00:09:09.803673  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1213 00:09:09.851557  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1213 00:09:09.851690  177307 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1213 00:09:12.107222  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.303514888s)
	I1213 00:09:12.107284  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I1213 00:09:12.107292  177307 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.255575693s)
	I1213 00:09:12.107308  177307 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1213 00:09:12.107336  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1213 00:09:12.107363  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1213 00:09:11.494563  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:11.495148  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:11.495182  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:11.495076  178294 retry.go:31] will retry after 2.859065831s: waiting for machine to come up
	I1213 00:09:14.356328  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:14.356778  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:14.356814  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:14.356719  178294 retry.go:31] will retry after 3.506628886s: waiting for machine to come up
	I1213 00:09:11.849447  177122 pod_ready.go:102] pod "etcd-embed-certs-335807" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:14.349299  177122 pod_ready.go:102] pod "etcd-embed-certs-335807" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:14.853963  177122 pod_ready.go:92] pod "etcd-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:14.853989  177122 pod_ready.go:81] duration metric: took 5.024945989s waiting for pod "etcd-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:14.854001  177122 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:14.861663  177122 pod_ready.go:92] pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:14.861685  177122 pod_ready.go:81] duration metric: took 7.676036ms waiting for pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:14.861697  177122 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:16.223090  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.115697846s)
	I1213 00:09:16.223134  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1213 00:09:16.223165  177307 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1213 00:09:16.223211  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1213 00:09:17.473407  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.25017316s)
	I1213 00:09:17.473435  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I1213 00:09:17.473476  177307 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1213 00:09:17.473552  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1213 00:09:17.864739  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:17.865213  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:17.865237  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:17.865171  178294 retry.go:31] will retry after 2.94932643s: waiting for machine to come up
	I1213 00:09:16.884215  177122 pod_ready.go:102] pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:17.383872  177122 pod_ready.go:92] pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:17.383906  177122 pod_ready.go:81] duration metric: took 2.52219538s waiting for pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.383928  177122 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-k8n7r" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.389464  177122 pod_ready.go:92] pod "kube-proxy-k8n7r" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:17.389482  177122 pod_ready.go:81] duration metric: took 5.547172ms waiting for pod "kube-proxy-k8n7r" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.389490  177122 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.419020  177122 pod_ready.go:92] pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:17.419047  177122 pod_ready.go:81] duration metric: took 29.549704ms waiting for pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.419056  177122 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:19.730210  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:22.069281  176813 start.go:369] acquired machines lock for "old-k8s-version-508612" in 1m3.72259979s
	I1213 00:09:22.069359  176813 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:09:22.069367  176813 fix.go:54] fixHost starting: 
	I1213 00:09:22.069812  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:22.069851  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:22.088760  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45405
	I1213 00:09:22.089211  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:22.089766  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:09:22.089795  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:22.090197  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:22.090396  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:22.090574  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:09:22.092039  176813 fix.go:102] recreateIfNeeded on old-k8s-version-508612: state=Stopped err=<nil>
	I1213 00:09:22.092064  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	W1213 00:09:22.092241  176813 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:09:22.094310  176813 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-508612" ...
	I1213 00:09:20.817420  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.817794  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has current primary IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.817833  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Found IP for machine: 192.168.72.144
	I1213 00:09:20.817870  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Reserving static IP address...
	I1213 00:09:20.818250  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-743278", mac: "52:54:00:d1:a8:22", ip: "192.168.72.144"} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:20.818272  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Reserved static IP address: 192.168.72.144
	I1213 00:09:20.818286  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | skip adding static IP to network mk-default-k8s-diff-port-743278 - found existing host DHCP lease matching {name: "default-k8s-diff-port-743278", mac: "52:54:00:d1:a8:22", ip: "192.168.72.144"}
	I1213 00:09:20.818298  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Getting to WaitForSSH function...
	I1213 00:09:20.818312  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for SSH to be available...
	I1213 00:09:20.820093  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.820378  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:20.820409  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.820525  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Using SSH client type: external
	I1213 00:09:20.820552  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa (-rw-------)
	I1213 00:09:20.820587  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:09:20.820618  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | About to run SSH command:
	I1213 00:09:20.820632  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | exit 0
	I1213 00:09:20.907942  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | SSH cmd err, output: <nil>: 
	I1213 00:09:20.908280  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetConfigRaw
	I1213 00:09:20.909042  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetIP
	I1213 00:09:20.911222  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.911544  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:20.911569  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.911826  177409 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/config.json ...
	I1213 00:09:20.912048  177409 machine.go:88] provisioning docker machine ...
	I1213 00:09:20.912071  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:20.912284  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetMachineName
	I1213 00:09:20.912425  177409 buildroot.go:166] provisioning hostname "default-k8s-diff-port-743278"
	I1213 00:09:20.912460  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetMachineName
	I1213 00:09:20.912585  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:20.914727  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.915081  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:20.915113  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.915257  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:20.915449  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:20.915562  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:20.915671  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:20.915842  177409 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:20.916275  177409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1213 00:09:20.916293  177409 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-743278 && echo "default-k8s-diff-port-743278" | sudo tee /etc/hostname
	I1213 00:09:21.042561  177409 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-743278
	
	I1213 00:09:21.042606  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.045461  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.045809  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.045851  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.045957  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.046181  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.046350  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.046508  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.046685  177409 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:21.047008  177409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1213 00:09:21.047034  177409 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-743278' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-743278/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-743278' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:09:21.169124  177409 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:09:21.169155  177409 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:09:21.169175  177409 buildroot.go:174] setting up certificates
	I1213 00:09:21.169185  177409 provision.go:83] configureAuth start
	I1213 00:09:21.169194  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetMachineName
	I1213 00:09:21.169502  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetIP
	I1213 00:09:21.172929  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.173329  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.173361  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.173540  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.175847  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.176249  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.176277  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.176447  177409 provision.go:138] copyHostCerts
	I1213 00:09:21.176509  177409 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:09:21.176525  177409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:09:21.176584  177409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:09:21.176677  177409 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:09:21.176744  177409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:09:21.176775  177409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:09:21.176841  177409 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:09:21.176848  177409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:09:21.176866  177409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:09:21.176922  177409 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-743278 san=[192.168.72.144 192.168.72.144 localhost 127.0.0.1 minikube default-k8s-diff-port-743278]
	I1213 00:09:21.314924  177409 provision.go:172] copyRemoteCerts
	I1213 00:09:21.315003  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:09:21.315032  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.318149  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.318550  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.318582  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.318787  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.319005  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.319191  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.319346  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:21.409699  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:09:21.438626  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1213 00:09:21.468607  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 00:09:21.495376  177409 provision.go:86] duration metric: configureAuth took 326.171872ms
	I1213 00:09:21.495403  177409 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:09:21.495621  177409 config.go:182] Loaded profile config "default-k8s-diff-port-743278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:09:21.495700  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.498778  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.499247  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.499279  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.499495  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.499710  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.499877  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.500098  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.500285  177409 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:21.500728  177409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1213 00:09:21.500751  177409 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:09:21.822577  177409 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:09:21.822606  177409 machine.go:91] provisioned docker machine in 910.541774ms
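The literal %!s(MISSING) in the provisioning command above (and later in "date +%!s(MISSING).%!N(MISSING)" and the "0%!"(MISSING) eviction thresholds) is not part of what actually ran on the guest; it is Go's fmt package flagging a %s verb that was formatted without a matching argument, so the command almost certainly contained a plain "printf %s". A two-line Go illustration of how that string gets produced:

    package main

    import "fmt"

    func main() {
    	// A format string containing %s but given no argument renders the
    	// verb as %!s(MISSING) -- exactly what the log shows in place of
    	// the intended "printf %s".
    	s := fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s")
    	fmt.Println(s)
    }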
	I1213 00:09:21.822619  177409 start.go:300] post-start starting for "default-k8s-diff-port-743278" (driver="kvm2")
	I1213 00:09:21.822632  177409 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:09:21.822659  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:21.823015  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:09:21.823044  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.825948  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.826367  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.826403  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.826577  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.826789  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.826965  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.827146  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:21.915743  177409 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:09:21.920142  177409 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:09:21.920169  177409 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:09:21.920249  177409 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:09:21.920343  177409 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:09:21.920474  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:09:21.929896  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:21.951854  177409 start.go:303] post-start completed in 129.217251ms
	I1213 00:09:21.951880  177409 fix.go:56] fixHost completed within 19.790175647s
	I1213 00:09:21.951904  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.954710  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.955103  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.955137  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.955352  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.955533  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.955685  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.955814  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.955980  177409 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:21.956485  177409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1213 00:09:21.956505  177409 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1213 00:09:22.069059  177409 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702426162.011062386
	
	I1213 00:09:22.069089  177409 fix.go:206] guest clock: 1702426162.011062386
	I1213 00:09:22.069100  177409 fix.go:219] Guest: 2023-12-13 00:09:22.011062386 +0000 UTC Remote: 2023-12-13 00:09:21.951884769 +0000 UTC m=+281.971624237 (delta=59.177617ms)
	I1213 00:09:22.069142  177409 fix.go:190] guest clock delta is within tolerance: 59.177617ms
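The fix.go lines above run date +%s.%N on the guest, parse the result, and accept the machine when the difference from the host clock stays inside a tolerance (59.177617ms in this run). A minimal sketch of that comparison, with the tolerance value assumed for illustration and float parsing used as a simplification of the real nanosecond handling:

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // checkClockDelta parses the guest's "date +%s.%N"-style output and
    // reports whether it is within tolerance of the local (host) clock.
    func checkClockDelta(guestOutput string, tolerance time.Duration) (time.Duration, bool, error) {
    	secs, err := strconv.ParseFloat(guestOutput, 64)
    	if err != nil {
    		return 0, false, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance, nil
    }

    func main() {
    	// Example value taken from the guest clock line above.
    	delta, ok, err := checkClockDelta("1702426162.011062386", 2*time.Second)
    	fmt.Println(delta, ok, err)
    }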
	I1213 00:09:22.069153  177409 start.go:83] releasing machines lock for "default-k8s-diff-port-743278", held for 19.907486915s
	I1213 00:09:22.069191  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:22.069478  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetIP
	I1213 00:09:22.072371  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.072761  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:22.072794  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.072922  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:22.073441  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:22.073605  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:22.073670  177409 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:09:22.073719  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:22.073821  177409 ssh_runner.go:195] Run: cat /version.json
	I1213 00:09:22.073841  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:22.076233  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.076550  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.076703  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:22.076733  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.076874  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:22.077050  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:22.077080  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.077052  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:22.077227  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:22.077303  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:22.077540  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:22.077630  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:22.077714  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:22.077851  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:22.188131  177409 ssh_runner.go:195] Run: systemctl --version
	I1213 00:09:22.193896  177409 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:09:22.339227  177409 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:09:22.346292  177409 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:09:22.346366  177409 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:09:22.361333  177409 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:09:22.361364  177409 start.go:475] detecting cgroup driver to use...
	I1213 00:09:22.361438  177409 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:09:22.374698  177409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:09:22.387838  177409 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:09:22.387897  177409 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:09:22.402969  177409 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:09:22.417038  177409 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:09:22.533130  177409 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:09:22.665617  177409 docker.go:219] disabling docker service ...
	I1213 00:09:22.665690  177409 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:09:22.681327  177409 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:09:22.692842  177409 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:09:22.816253  177409 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:09:22.951988  177409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:09:22.967607  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:09:22.985092  177409 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1213 00:09:22.985158  177409 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:22.994350  177409 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:09:22.994403  177409 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:23.003372  177409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:23.012176  177409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:23.021215  177409 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:09:23.031105  177409 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:09:23.039486  177409 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:09:23.039552  177409 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:09:23.053085  177409 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:09:23.062148  177409 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:09:23.182275  177409 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 00:09:23.357901  177409 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:09:23.357991  177409 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:09:23.364148  177409 start.go:543] Will wait 60s for crictl version
	I1213 00:09:23.364225  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:09:23.368731  177409 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:09:23.408194  177409 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1213 00:09:23.408288  177409 ssh_runner.go:195] Run: crio --version
	I1213 00:09:23.461483  177409 ssh_runner.go:195] Run: crio --version
	I1213 00:09:23.513553  177409 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
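Between restarting CRI-O and the "Preparing Kubernetes" banner, the log waits up to 60s for /var/run/crio/crio.sock to appear and for crictl to answer a version query. A small sketch of the socket wait (only the path and timeout come from the log; the helper itself is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until the given path exists (the CRI-O socket
    // shows up once the daemon finishes restarting) or the timeout hits.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }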
	I1213 00:09:20.148999  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.675412499s)
	I1213 00:09:20.149037  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I1213 00:09:20.149073  177307 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1213 00:09:20.149116  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1213 00:09:21.101559  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1213 00:09:21.101608  177307 cache_images.go:123] Successfully loaded all cached images
	I1213 00:09:21.101615  177307 cache_images.go:92] LoadImages completed in 17.428934706s
	I1213 00:09:21.101694  177307 ssh_runner.go:195] Run: crio config
	I1213 00:09:21.159955  177307 cni.go:84] Creating CNI manager for ""
	I1213 00:09:21.159978  177307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:21.159999  177307 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1213 00:09:21.160023  177307 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.181 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-143586 NodeName:no-preload-143586 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 00:09:21.160198  177307 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-143586"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.181
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.181"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 00:09:21.160303  177307 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-143586 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-143586 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1213 00:09:21.160378  177307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1213 00:09:21.170615  177307 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:09:21.170701  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:09:21.180228  177307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1213 00:09:21.198579  177307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1213 00:09:21.215096  177307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1213 00:09:21.233288  177307 ssh_runner.go:195] Run: grep 192.168.50.181	control-plane.minikube.internal$ /etc/hosts
	I1213 00:09:21.236666  177307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:21.248811  177307 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586 for IP: 192.168.50.181
	I1213 00:09:21.248847  177307 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:21.249007  177307 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:09:21.249058  177307 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:09:21.249154  177307 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/client.key
	I1213 00:09:21.249238  177307 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/apiserver.key.8f5c2e66
	I1213 00:09:21.249291  177307 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/proxy-client.key
	I1213 00:09:21.249427  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:09:21.249468  177307 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:09:21.249484  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:09:21.249523  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:09:21.249559  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:09:21.249591  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:09:21.249642  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:21.250517  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:09:21.276697  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 00:09:21.299356  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:09:21.322849  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 00:09:21.348145  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:09:21.370885  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:09:21.393257  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:09:21.418643  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:09:21.446333  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:09:21.476374  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:09:21.506662  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:09:21.530653  177307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:09:21.555129  177307 ssh_runner.go:195] Run: openssl version
	I1213 00:09:21.561174  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:09:21.571372  177307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:21.575988  177307 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:21.576053  177307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:21.581633  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 00:09:21.590564  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:09:21.599910  177307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:09:21.604113  177307 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:09:21.604160  177307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:09:21.609303  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1213 00:09:21.619194  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:09:21.628171  177307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:09:21.632419  177307 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:09:21.632494  177307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:09:21.638310  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
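The openssl/ln sequence above installs each CA into the guest's trust store by computing the certificate's OpenSSL subject hash and pointing /etc/ssl/certs/<hash>.0 at it, which is where the b5213941.0, 51391683.0 and 3ec20f2e.0 names come from. A sketch of the same idea in Go, shelling out to openssl just as the log does (the helper name is made up):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCACert computes the OpenSSL subject hash of a CA certificate and
    // symlinks /etc/ssl/certs/<hash>.0 at it so system TLS lookups find it.
    func linkCACert(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	_ = os.Remove(link) // ln -fs equivalent: replace any existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }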
	I1213 00:09:21.648369  177307 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:09:21.653143  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 00:09:21.659543  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 00:09:21.665393  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 00:09:21.670855  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 00:09:21.676290  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 00:09:21.681864  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 00:09:21.688162  177307 kubeadm.go:404] StartCluster: {Name:no-preload-143586 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-143586 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.181 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:09:21.688243  177307 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:09:21.688280  177307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:21.727451  177307 cri.go:89] found id: ""
	I1213 00:09:21.727536  177307 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:09:21.739044  177307 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1213 00:09:21.739066  177307 kubeadm.go:636] restartCluster start
	I1213 00:09:21.739124  177307 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 00:09:21.747328  177307 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:21.748532  177307 kubeconfig.go:92] found "no-preload-143586" server: "https://192.168.50.181:8443"
	I1213 00:09:21.751029  177307 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 00:09:21.759501  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:21.759546  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:21.771029  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:21.771048  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:21.771095  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:21.782184  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:22.282507  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:22.282588  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:22.294105  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:22.783207  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:22.783266  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:22.796776  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:23.282325  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:23.282395  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:23.295974  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:23.782516  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:23.782615  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:23.797912  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
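The restartCluster block above probes for a running kube-apiserver roughly every 500ms by executing sudo pgrep -xnf kube-apiserver.*minikube.* over SSH; pgrep exiting with status 1 simply means no matching process exists yet, so the check repeats until the control plane comes back. A local-only sketch of that probe loop (running pgrep directly instead of over SSH, with the overall timeout assumed for illustration):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverRunning reports whether a kube-apiserver process matching the
    // minikube pattern exists; pgrep exits non-zero when nothing matches.
    func apiserverRunning() bool {
    	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			fmt.Println("apiserver process found")
    			return
    		}
    		fmt.Println("Checking apiserver status ... not running yet")
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("gave up waiting for apiserver")
    }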
	I1213 00:09:23.514911  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetIP
	I1213 00:09:23.517973  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:23.518335  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:23.518366  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:23.518566  177409 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1213 00:09:23.523522  177409 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:23.537195  177409 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1213 00:09:23.537261  177409 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:23.579653  177409 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1213 00:09:23.579729  177409 ssh_runner.go:195] Run: which lz4
	I1213 00:09:23.583956  177409 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 00:09:23.588686  177409 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 00:09:23.588720  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
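
The sequence above is the preload path: if crictl reports none of the expected release images, minikube checks for an existing tarball on the guest and, failing that, copies its cached one over. A rough reproduction by hand, using the paths as they appear in the log:

    # Does the runtime already have the release images?
    sudo crictl images --output json | grep -q 'registry.k8s.io/kube-apiserver:v1.28.4' \
      || echo "images not preloaded"
    # Is a preload tarball already on the guest? (exit 1 here is what triggers the scp above)
    stat -c "%s %y" /preloaded.tar.lz4 || echo "no tarball on guest, copy the cached one over"
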
	I1213 00:09:22.095647  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Start
	I1213 00:09:22.095821  176813 main.go:141] libmachine: (old-k8s-version-508612) Ensuring networks are active...
	I1213 00:09:22.096548  176813 main.go:141] libmachine: (old-k8s-version-508612) Ensuring network default is active
	I1213 00:09:22.096936  176813 main.go:141] libmachine: (old-k8s-version-508612) Ensuring network mk-old-k8s-version-508612 is active
	I1213 00:09:22.097366  176813 main.go:141] libmachine: (old-k8s-version-508612) Getting domain xml...
	I1213 00:09:22.097939  176813 main.go:141] libmachine: (old-k8s-version-508612) Creating domain...
	I1213 00:09:23.423128  176813 main.go:141] libmachine: (old-k8s-version-508612) Waiting to get IP...
	I1213 00:09:23.424090  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:23.424606  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:23.424676  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:23.424588  178471 retry.go:31] will retry after 260.416347ms: waiting for machine to come up
	I1213 00:09:23.687268  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:23.687867  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:23.687902  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:23.687814  178471 retry.go:31] will retry after 377.709663ms: waiting for machine to come up
	I1213 00:09:24.067588  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:24.068249  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:24.068277  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:24.068177  178471 retry.go:31] will retry after 480.876363ms: waiting for machine to come up
	I1213 00:09:24.550715  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:24.551244  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:24.551278  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:24.551191  178471 retry.go:31] will retry after 389.885819ms: waiting for machine to come up
	I1213 00:09:24.942898  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:24.943495  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:24.943526  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:24.943443  178471 retry.go:31] will retry after 532.578432ms: waiting for machine to come up
	I1213 00:09:25.478278  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:25.478810  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:25.478845  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:25.478781  178471 retry.go:31] will retry after 599.649827ms: waiting for machine to come up
	I1213 00:09:22.230086  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:24.729105  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:24.282598  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:24.282708  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:24.298151  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:24.782530  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:24.782639  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:24.798661  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:25.283235  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:25.283393  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:25.297662  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:25.783319  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:25.783436  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:25.797129  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:26.282666  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:26.282789  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:26.295674  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:26.783065  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:26.783147  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:26.794192  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:27.282703  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:27.282775  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:27.294823  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:27.782891  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:27.782975  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:27.798440  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:28.282826  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:28.282909  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:28.293752  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:28.782264  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:28.782325  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:28.793986  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:25.524765  177409 crio.go:444] Took 1.940853 seconds to copy over tarball
	I1213 00:09:25.524843  177409 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 00:09:28.426493  177409 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.901618536s)
	I1213 00:09:28.426522  177409 crio.go:451] Took 2.901730 seconds to extract the tarball
	I1213 00:09:28.426533  177409 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 00:09:28.467170  177409 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:28.520539  177409 crio.go:496] all images are preloaded for cri-o runtime.
	I1213 00:09:28.520567  177409 cache_images.go:84] Images are preloaded, skipping loading
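
Between the two crictl listings the tarball is unpacked straight into /var, where cri-o keeps its image and container storage, and then removed; the equivalent commands from the log:

    # Unpack the lz4-compressed preload into /var, then clean up the tarball
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
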
	I1213 00:09:28.520654  177409 ssh_runner.go:195] Run: crio config
	I1213 00:09:28.588320  177409 cni.go:84] Creating CNI manager for ""
	I1213 00:09:28.588348  177409 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:28.588370  177409 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1213 00:09:28.588395  177409 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-743278 NodeName:default-k8s-diff-port-743278 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 00:09:28.588593  177409 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-743278"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 00:09:28.588687  177409 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-743278 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-743278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1213 00:09:28.588755  177409 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1213 00:09:28.597912  177409 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:09:28.597987  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:09:28.608324  177409 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1213 00:09:28.627102  177409 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 00:09:28.646837  177409 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1213 00:09:28.664534  177409 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I1213 00:09:28.668580  177409 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
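
The hosts update above is idempotent: it filters out any existing control-plane.minikube.internal entry, appends a fresh one, and copies the temp file back, so repeated starts never accumulate duplicate lines. The same idiom, spelled out:

    # Drop any stale entry, append the current one, write via a temp file ($$ = shell PID)
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
      printf '192.168.72.144\tcontrol-plane.minikube.internal\n'; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts
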
	I1213 00:09:28.680736  177409 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278 for IP: 192.168.72.144
	I1213 00:09:28.680777  177409 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:28.680971  177409 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:09:28.681037  177409 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:09:28.681140  177409 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/client.key
	I1213 00:09:28.681234  177409 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/apiserver.key.1dd7f3f2
	I1213 00:09:28.681301  177409 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/proxy-client.key
	I1213 00:09:28.681480  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:09:28.681525  177409 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:09:28.681543  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:09:28.681587  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:09:28.681636  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:09:28.681681  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:09:28.681743  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:28.682710  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:09:28.707852  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 00:09:28.732792  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:09:28.755545  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 00:09:28.779880  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:09:28.805502  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:09:28.829894  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:09:28.853211  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:09:28.877291  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:09:28.899870  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:09:28.922141  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:09:28.945634  177409 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:09:28.962737  177409 ssh_runner.go:195] Run: openssl version
	I1213 00:09:28.968869  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:09:28.980535  177409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:09:28.985219  177409 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:09:28.985284  177409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:09:28.990911  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 00:09:29.001595  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:09:29.012408  177409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:29.017644  177409 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:29.017760  177409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:29.023914  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 00:09:29.034793  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:09:29.045825  177409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:09:29.050538  177409 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:09:29.050584  177409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:09:29.057322  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1213 00:09:29.067993  177409 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:09:29.072782  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 00:09:29.078806  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 00:09:29.084744  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 00:09:29.090539  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 00:09:29.096734  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 00:09:29.102729  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
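
Two OpenSSL conventions show up in the certificate section above. The symlink names (3ec20f2e.0, b5213941.0, 51391683.0) are subject-name hashes, which is how OpenSSL locates CA certificates in /etc/ssl/certs; and -checkend N exits non-zero if the certificate expires within N seconds, so 86400 means "still valid for at least a day". For example:

    # Hash-named symlink that the trust-store lookup expects
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    # Exit 0: not expiring within 24h; exit 1: expires (or has expired) within that window
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
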
	I1213 00:09:29.108909  177409 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-743278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-743278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:09:29.109022  177409 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:09:29.109095  177409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:29.158003  177409 cri.go:89] found id: ""
	I1213 00:09:29.158100  177409 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:09:29.169464  177409 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1213 00:09:29.169500  177409 kubeadm.go:636] restartCluster start
	I1213 00:09:29.169555  177409 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 00:09:29.180347  177409 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:29.181609  177409 kubeconfig.go:92] found "default-k8s-diff-port-743278" server: "https://192.168.72.144:8444"
	I1213 00:09:29.184377  177409 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 00:09:29.193593  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.193645  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.205447  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:29.205465  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.205519  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.221169  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:29.721729  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.721835  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.735942  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:26.080407  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:26.081034  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:26.081061  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:26.080973  178471 retry.go:31] will retry after 1.103545811s: waiting for machine to come up
	I1213 00:09:27.186673  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:27.187208  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:27.187241  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:27.187152  178471 retry.go:31] will retry after 977.151221ms: waiting for machine to come up
	I1213 00:09:28.165799  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:28.166219  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:28.166257  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:28.166166  178471 retry.go:31] will retry after 1.27451971s: waiting for machine to come up
	I1213 00:09:29.441683  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:29.442203  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:29.442240  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:29.442122  178471 retry.go:31] will retry after 1.620883976s: waiting for machine to come up
	I1213 00:09:26.733297  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:29.624623  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:29.282975  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.621544  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.632749  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:29.783112  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.783214  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.794919  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:30.282457  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:30.282528  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:30.293852  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:30.782400  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:30.782499  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:30.797736  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.282276  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:31.282367  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:31.298115  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.759957  177307 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1213 00:09:31.760001  177307 kubeadm.go:1135] stopping kube-system containers ...
	I1213 00:09:31.760013  177307 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 00:09:31.760078  177307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:31.799045  177307 cri.go:89] found id: ""
	I1213 00:09:31.799146  177307 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 00:09:31.813876  177307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:09:31.823305  177307 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:09:31.823382  177307 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:09:31.831741  177307 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1213 00:09:31.831767  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:31.961871  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:32.826330  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:33.045107  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:33.119065  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
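
Because the config check failed (the admin/kubelet/controller-manager/scheduler kubeconfigs were missing), the restart path regenerates everything by running individual kubeadm init phases against the copied config rather than a full kubeadm init. The sequence above, condensed:

    # Re-run the phases in order with the versioned binaries on PATH (v1.29.0-rc.2 in this run);
    # $phase is deliberately left unquoted so "certs all" splits into two arguments
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
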
	I1213 00:09:33.187783  177307 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:09:33.187887  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:33.217142  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:33.735695  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:34.236063  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:30.221906  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:30.230723  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:30.243849  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:30.721380  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:30.721492  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:30.734401  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.222026  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:31.222150  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:31.235400  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.722107  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:31.722189  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:31.735415  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:32.222216  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:32.222365  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:32.238718  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:32.721270  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:32.721389  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:32.735677  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:33.222261  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:33.222329  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:33.243918  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:33.721349  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:33.721438  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:33.738138  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:34.221645  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:34.221748  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:34.238845  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:34.721320  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:34.721390  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:34.738271  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.065065  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:31.065494  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:31.065528  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:31.065436  178471 retry.go:31] will retry after 2.452686957s: waiting for machine to come up
	I1213 00:09:33.519937  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:33.520505  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:33.520537  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:33.520468  178471 retry.go:31] will retry after 2.830999713s: waiting for machine to come up
	I1213 00:09:31.729101  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:34.229143  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:34.735218  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:35.235570  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:35.736120  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:35.764916  177307 api_server.go:72] duration metric: took 2.577131698s to wait for apiserver process to appear ...
	I1213 00:09:35.764942  177307 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:09:35.764971  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:35.765820  177307 api_server.go:269] stopped: https://192.168.50.181:8443/healthz: Get "https://192.168.50.181:8443/healthz": dial tcp 192.168.50.181:8443: connect: connection refused
	I1213 00:09:35.765860  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:35.766257  177307 api_server.go:269] stopped: https://192.168.50.181:8443/healthz: Get "https://192.168.50.181:8443/healthz": dial tcp 192.168.50.181:8443: connect: connection refused
	I1213 00:09:36.266842  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:35.221935  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:35.222069  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:35.240609  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:35.721801  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:35.721965  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:35.765295  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:36.221944  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:36.222021  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:36.238211  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:36.721750  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:36.721830  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:36.736765  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:37.221936  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:37.222185  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:37.238002  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:37.721304  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:37.721385  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:37.734166  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:38.221603  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:38.221701  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:38.235174  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:38.721704  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:38.721794  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:38.735963  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:39.193664  177409 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1213 00:09:39.193713  177409 kubeadm.go:1135] stopping kube-system containers ...
	I1213 00:09:39.193727  177409 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 00:09:39.193787  177409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:39.238262  177409 cri.go:89] found id: ""
	I1213 00:09:39.238336  177409 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 00:09:39.258625  177409 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:09:39.271127  177409 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:09:39.271196  177409 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:09:39.280870  177409 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1213 00:09:39.280906  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:39.399746  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:36.353967  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:36.354453  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:36.354481  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:36.354415  178471 retry.go:31] will retry after 2.983154328s: waiting for machine to come up
	I1213 00:09:39.341034  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:39.341497  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:39.341526  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:39.341462  178471 retry.go:31] will retry after 3.436025657s: waiting for machine to come up
	I1213 00:09:36.230811  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:38.729730  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:40.732654  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:39.693843  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:39.693877  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:39.693896  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:39.767118  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:39.767153  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:39.767169  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:39.787684  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:39.787725  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
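
The 403 bodies earlier are the apiserver rejecting an anonymous GET of /healthz before the RBAC bootstrap roles exist, and the 500 bodies list which poststarthooks are still pending; both clear up as the hooks finish, which is what the later probes below show. The same probe by hand (-k only because the serving certificate is self-signed):

    # Poll the health endpoint minikube is watching; the response body gives per-check status
    curl -ks https://192.168.50.181:8443/healthz
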
	I1213 00:09:40.267069  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:40.272416  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:40.272464  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:40.766651  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:40.799906  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:40.799942  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:41.266411  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:41.271259  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 200:
	ok
	I1213 00:09:41.278691  177307 api_server.go:141] control plane version: v1.29.0-rc.2
	I1213 00:09:41.278715  177307 api_server.go:131] duration metric: took 5.51376527s to wait for apiserver health ...
	I1213 00:09:41.278725  177307 cni.go:84] Creating CNI manager for ""
	I1213 00:09:41.278732  177307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:41.280473  177307 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:09:41.281924  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:09:41.308598  177307 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:09:41.330367  177307 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:09:41.342017  177307 system_pods.go:59] 8 kube-system pods found
	I1213 00:09:41.342048  177307 system_pods.go:61] "coredns-76f75df574-87nc6" [829c7a44-85a0-4ed0-b98a-b5016aa04b97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 00:09:41.342054  177307 system_pods.go:61] "etcd-no-preload-143586" [b50e57af-530a-4689-bf42-a9f74fa6bea1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 00:09:41.342065  177307 system_pods.go:61] "kube-apiserver-no-preload-143586" [3aed4b84-e029-433a-8394-f99608b52edd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 00:09:41.342071  177307 system_pods.go:61] "kube-controller-manager-no-preload-143586" [f88e182a-0a81-4c7b-b2b3-d6911baf340f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 00:09:41.342080  177307 system_pods.go:61] "kube-proxy-8k9x6" [a71d2257-2012-4d0d-948d-d69c0c04bd2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 00:09:41.342086  177307 system_pods.go:61] "kube-scheduler-no-preload-143586" [dfb7b176-fbf8-4542-890f-1eba0f699b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 00:09:41.342098  177307 system_pods.go:61] "metrics-server-57f55c9bc5-px5lm" [25b8b500-0ad0-4da3-8f7f-d8c46a848e8a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:09:41.342106  177307 system_pods.go:61] "storage-provisioner" [bb18a95a-ed99-43f7-bc6f-333e0b86cacc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 00:09:41.342114  177307 system_pods.go:74] duration metric: took 11.726461ms to wait for pod list to return data ...
	I1213 00:09:41.342132  177307 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:09:41.345985  177307 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:09:41.346011  177307 node_conditions.go:123] node cpu capacity is 2
	I1213 00:09:41.346021  177307 node_conditions.go:105] duration metric: took 3.884209ms to run NodePressure ...
	I1213 00:09:41.346038  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:41.682789  177307 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1213 00:09:41.690867  177307 kubeadm.go:787] kubelet initialised
	I1213 00:09:41.690892  177307 kubeadm.go:788] duration metric: took 8.076203ms waiting for restarted kubelet to initialise ...
	I1213 00:09:41.690902  177307 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:41.698622  177307 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-87nc6" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:43.720619  177307 pod_ready.go:102] pod "coredns-76f75df574-87nc6" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:40.471390  177409 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.071602244s)
	I1213 00:09:40.471425  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:40.665738  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:40.786290  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:40.859198  177409 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:09:40.859302  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:40.887488  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:41.406398  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:41.906653  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:42.405784  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:42.906462  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:43.406489  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:43.432933  177409 api_server.go:72] duration metric: took 2.573735322s to wait for apiserver process to appear ...
	I1213 00:09:43.432975  177409 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:09:43.432997  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:43.433588  177409 api_server.go:269] stopped: https://192.168.72.144:8444/healthz: Get "https://192.168.72.144:8444/healthz": dial tcp 192.168.72.144:8444: connect: connection refused
	I1213 00:09:43.433641  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:43.434089  177409 api_server.go:269] stopped: https://192.168.72.144:8444/healthz: Get "https://192.168.72.144:8444/healthz": dial tcp 192.168.72.144:8444: connect: connection refused
	I1213 00:09:43.934469  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:42.779498  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.779971  176813 main.go:141] libmachine: (old-k8s-version-508612) Found IP for machine: 192.168.39.70
	I1213 00:09:42.779993  176813 main.go:141] libmachine: (old-k8s-version-508612) Reserving static IP address...
	I1213 00:09:42.780011  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has current primary IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.780466  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "old-k8s-version-508612", mac: "52:54:00:dd:da:91", ip: "192.168.39.70"} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:42.780504  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | skip adding static IP to network mk-old-k8s-version-508612 - found existing host DHCP lease matching {name: "old-k8s-version-508612", mac: "52:54:00:dd:da:91", ip: "192.168.39.70"}
	I1213 00:09:42.780524  176813 main.go:141] libmachine: (old-k8s-version-508612) Reserved static IP address: 192.168.39.70
	I1213 00:09:42.780547  176813 main.go:141] libmachine: (old-k8s-version-508612) Waiting for SSH to be available...
	I1213 00:09:42.780559  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Getting to WaitForSSH function...
	I1213 00:09:42.783019  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.783434  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:42.783482  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.783566  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Using SSH client type: external
	I1213 00:09:42.783598  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa (-rw-------)
	I1213 00:09:42.783638  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:09:42.783661  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | About to run SSH command:
	I1213 00:09:42.783681  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | exit 0
	I1213 00:09:42.885148  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | SSH cmd err, output: <nil>: 
	I1213 00:09:42.885690  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetConfigRaw
	I1213 00:09:42.886388  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetIP
	I1213 00:09:42.889440  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.889898  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:42.889937  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.890209  176813 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/config.json ...
	I1213 00:09:42.890423  176813 machine.go:88] provisioning docker machine ...
	I1213 00:09:42.890444  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:42.890685  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetMachineName
	I1213 00:09:42.890874  176813 buildroot.go:166] provisioning hostname "old-k8s-version-508612"
	I1213 00:09:42.890899  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetMachineName
	I1213 00:09:42.891039  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:42.893678  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.894021  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:42.894051  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.894174  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:42.894391  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:42.894556  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:42.894720  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:42.894909  176813 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:42.895383  176813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1213 00:09:42.895406  176813 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-508612 && echo "old-k8s-version-508612" | sudo tee /etc/hostname
	I1213 00:09:43.045290  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-508612
	
	I1213 00:09:43.045345  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.048936  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.049438  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.049476  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.049662  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:43.049877  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.050074  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.050231  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:43.050413  176813 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:43.050888  176813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1213 00:09:43.050919  176813 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-508612' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-508612/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-508612' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:09:43.183021  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:09:43.183061  176813 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:09:43.183089  176813 buildroot.go:174] setting up certificates
	I1213 00:09:43.183102  176813 provision.go:83] configureAuth start
	I1213 00:09:43.183115  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetMachineName
	I1213 00:09:43.183467  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetIP
	I1213 00:09:43.186936  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.187409  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.187441  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.187620  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.190125  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.190560  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.190612  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.190775  176813 provision.go:138] copyHostCerts
	I1213 00:09:43.190842  176813 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:09:43.190861  176813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:09:43.190936  176813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:09:43.191113  176813 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:09:43.191126  176813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:09:43.191158  176813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:09:43.191245  176813 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:09:43.191256  176813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:09:43.191284  176813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:09:43.191354  176813 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-508612 san=[192.168.39.70 192.168.39.70 localhost 127.0.0.1 minikube old-k8s-version-508612]
	I1213 00:09:43.321927  176813 provision.go:172] copyRemoteCerts
	I1213 00:09:43.321999  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:09:43.322039  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.325261  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.325653  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.325686  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.325920  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:43.326128  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.326300  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:43.326474  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:09:43.420656  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:09:43.445997  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1213 00:09:43.471466  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 00:09:43.500104  176813 provision.go:86] duration metric: configureAuth took 316.983913ms
	I1213 00:09:43.500137  176813 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:09:43.500380  176813 config.go:182] Loaded profile config "old-k8s-version-508612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1213 00:09:43.500554  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.503567  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.503994  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.504034  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.504320  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:43.504551  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.504797  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.504978  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:43.505164  176813 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:43.505640  176813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1213 00:09:43.505668  176813 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:09:43.859639  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:09:43.859723  176813 machine.go:91] provisioned docker machine in 969.28446ms
	I1213 00:09:43.859741  176813 start.go:300] post-start starting for "old-k8s-version-508612" (driver="kvm2")
	I1213 00:09:43.859754  176813 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:09:43.859781  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:43.860174  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:09:43.860207  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.863407  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.863903  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.863944  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.864142  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:43.864340  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.864604  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:43.864907  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:09:43.957616  176813 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:09:43.963381  176813 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:09:43.963413  176813 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:09:43.963489  176813 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:09:43.963594  176813 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:09:43.963710  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:09:43.972902  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:44.001469  176813 start.go:303] post-start completed in 141.706486ms
	I1213 00:09:44.001503  176813 fix.go:56] fixHost completed within 21.932134773s
	I1213 00:09:44.001532  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:44.004923  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.005334  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:44.005410  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.005545  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:44.005846  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:44.006067  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:44.006198  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:44.006401  176813 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:44.006815  176813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1213 00:09:44.006841  176813 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1213 00:09:44.134363  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702426184.079167065
	
	I1213 00:09:44.134389  176813 fix.go:206] guest clock: 1702426184.079167065
	I1213 00:09:44.134398  176813 fix.go:219] Guest: 2023-12-13 00:09:44.079167065 +0000 UTC Remote: 2023-12-13 00:09:44.001508908 +0000 UTC m=+368.244893563 (delta=77.658157ms)
	I1213 00:09:44.134434  176813 fix.go:190] guest clock delta is within tolerance: 77.658157ms
	I1213 00:09:44.134446  176813 start.go:83] releasing machines lock for "old-k8s-version-508612", held for 22.06510734s
	I1213 00:09:44.134469  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:44.134760  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetIP
	I1213 00:09:44.137820  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.138245  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:44.138275  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.138444  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:44.138957  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:44.139152  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:44.139229  176813 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:09:44.139300  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:44.139358  176813 ssh_runner.go:195] Run: cat /version.json
	I1213 00:09:44.139383  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:44.142396  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.142920  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:44.142981  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.143041  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.143197  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:44.143340  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:44.143473  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:44.143487  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:44.143505  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.143633  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:44.143628  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:09:44.143786  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:44.143913  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:44.144041  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:09:44.235010  176813 ssh_runner.go:195] Run: systemctl --version
	I1213 00:09:44.263174  176813 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:09:44.424330  176813 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:09:44.433495  176813 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:09:44.433573  176813 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:09:44.454080  176813 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:09:44.454106  176813 start.go:475] detecting cgroup driver to use...
	I1213 00:09:44.454173  176813 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:09:44.482370  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:09:44.499334  176813 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:09:44.499429  176813 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:09:44.516413  176813 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:09:44.529636  176813 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:09:44.638215  176813 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:09:44.774229  176813 docker.go:219] disabling docker service ...
	I1213 00:09:44.774304  176813 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:09:44.790414  176813 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:09:44.804909  176813 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:09:44.938205  176813 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:09:45.069429  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:09:45.085783  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:09:45.105487  176813 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1213 00:09:45.105558  176813 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:45.117662  176813 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:09:45.117789  176813 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:45.129560  176813 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:45.139168  176813 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:45.148466  176813 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:09:45.157626  176813 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:09:45.166608  176813 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:09:45.166675  176813 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:09:45.179666  176813 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:09:45.190356  176813 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:09:45.366019  176813 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 00:09:45.549130  176813 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:09:45.549209  176813 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:09:45.554753  176813 start.go:543] Will wait 60s for crictl version
	I1213 00:09:45.554809  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:45.559452  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:09:45.605106  176813 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1213 00:09:45.605180  176813 ssh_runner.go:195] Run: crio --version
	I1213 00:09:45.654428  176813 ssh_runner.go:195] Run: crio --version
	I1213 00:09:45.711107  176813 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1213 00:09:45.712598  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetIP
	I1213 00:09:45.716022  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:45.716371  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:45.716405  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:45.716751  176813 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 00:09:45.722339  176813 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:45.739528  176813 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1213 00:09:45.739594  176813 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:45.786963  176813 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1213 00:09:45.787044  176813 ssh_runner.go:195] Run: which lz4
	I1213 00:09:45.791462  176813 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1213 00:09:45.795923  176813 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 00:09:45.795952  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1213 00:09:43.228658  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:45.231385  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:45.721999  177307 pod_ready.go:92] pod "coredns-76f75df574-87nc6" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:45.722026  177307 pod_ready.go:81] duration metric: took 4.023377357s waiting for pod "coredns-76f75df574-87nc6" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:45.722038  177307 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:47.744891  177307 pod_ready.go:102] pod "etcd-no-preload-143586" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:48.255190  177307 pod_ready.go:92] pod "etcd-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:48.255220  177307 pod_ready.go:81] duration metric: took 2.533174326s waiting for pod "etcd-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:48.255233  177307 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:48.263450  177307 pod_ready.go:92] pod "kube-apiserver-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:48.263477  177307 pod_ready.go:81] duration metric: took 8.236475ms waiting for pod "kube-apiserver-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:48.263489  177307 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:48.212975  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:48.213009  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:48.213033  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:48.303921  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:48.303963  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:48.435167  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:48.442421  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:48.442455  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:48.934740  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:48.941126  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:48.941161  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:49.434967  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:49.444960  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:49.445016  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:49.935234  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:49.941400  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I1213 00:09:49.951057  177409 api_server.go:141] control plane version: v1.28.4
	I1213 00:09:49.951094  177409 api_server.go:131] duration metric: took 6.518109828s to wait for apiserver health ...
	I1213 00:09:49.951105  177409 cni.go:84] Creating CNI manager for ""
	I1213 00:09:49.951115  177409 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:49.953198  177409 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:09:49.954914  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:09:49.989291  177409 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
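	(The lines above show minikube's api_server.go polling the apiserver /healthz endpoint until the failing poststarthooks clear and it returns 200, after which a bridge CNI config is written. A minimal Go sketch of that style of poll loop follows; the URL, timeout, retry interval, and the decision to skip TLS verification are illustrative assumptions, not minikube's actual code.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the deadline passes,
	// mirroring the "Checking apiserver healthz at ... returned 500/200" pattern
	// in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serves a self-signed certificate here, so verification
			// is skipped purely for this illustration.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		// Endpoint taken from the log; hypothetical as a standalone example.
		if err := waitForHealthz("https://192.168.72.144:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}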
	I1213 00:09:47.527308  176813 crio.go:444] Took 1.735860 seconds to copy over tarball
	I1213 00:09:47.527390  176813 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 00:09:50.641162  176813 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.113740813s)
	I1213 00:09:50.641195  176813 crio.go:451] Took 3.113856 seconds to extract the tarball
	I1213 00:09:50.641208  176813 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 00:09:50.683194  176813 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:50.729476  176813 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1213 00:09:50.729503  176813 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 00:09:50.729574  176813 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:50.729602  176813 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1213 00:09:50.729611  176813 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1213 00:09:50.729617  176813 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:50.729653  176813 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:50.729605  176813 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:50.729572  176813 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:50.729589  176813 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:50.730849  176813 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:50.730908  176813 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:50.730924  176813 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1213 00:09:50.730968  176813 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:50.730986  176813 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:50.730997  176813 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:50.730847  176813 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1213 00:09:50.731163  176813 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:47.235674  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:49.728030  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:50.051886  177409 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:09:50.069774  177409 system_pods.go:59] 8 kube-system pods found
	I1213 00:09:50.069817  177409 system_pods.go:61] "coredns-5dd5756b68-ftv9l" [60d9730b-2e6b-4263-a70d-273cf6837f60] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 00:09:50.069834  177409 system_pods.go:61] "etcd-default-k8s-diff-port-743278" [4c78aadc-8213-4cd3-a365-e8d71d00b1ed] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 00:09:50.069849  177409 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-743278" [2590235b-fc24-41ed-8879-909cbba26d5c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 00:09:50.069862  177409 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-743278" [58396a31-0a0d-4a3f-b658-e27f836affd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 00:09:50.069875  177409 system_pods.go:61] "kube-proxy-zk4wl" [e20fe8f7-0c1f-4be3-8184-cd3d6cc19a43] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 00:09:50.069887  177409 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-743278" [e4312815-a719-4ab5-b628-9958dd7ce658] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 00:09:50.069907  177409 system_pods.go:61] "metrics-server-57f55c9bc5-6q9jg" [b1849258-4fd1-43a5-b67b-02d8e44acd8b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:09:50.069919  177409 system_pods.go:61] "storage-provisioner" [d87ee16e-300f-4797-b0be-efc256d0e827] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 00:09:50.069932  177409 system_pods.go:74] duration metric: took 18.020213ms to wait for pod list to return data ...
	I1213 00:09:50.069944  177409 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:09:50.073659  177409 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:09:50.073688  177409 node_conditions.go:123] node cpu capacity is 2
	I1213 00:09:50.073701  177409 node_conditions.go:105] duration metric: took 3.752016ms to run NodePressure ...
	I1213 00:09:50.073722  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:50.545413  177409 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1213 00:09:50.559389  177409 kubeadm.go:787] kubelet initialised
	I1213 00:09:50.559421  177409 kubeadm.go:788] duration metric: took 13.971205ms waiting for restarted kubelet to initialise ...
	I1213 00:09:50.559442  177409 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:50.568069  177409 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.580294  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.580327  177409 pod_ready.go:81] duration metric: took 12.225698ms waiting for pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.580340  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.580348  177409 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.588859  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.588893  177409 pod_ready.go:81] duration metric: took 8.526992ms waiting for pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.588909  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.588917  177409 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.609726  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.609759  177409 pod_ready.go:81] duration metric: took 20.834011ms waiting for pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.609773  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.609781  177409 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.626724  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.626757  177409 pod_ready.go:81] duration metric: took 16.966751ms waiting for pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.626770  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.626777  177409 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zk4wl" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.950893  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "kube-proxy-zk4wl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.950927  177409 pod_ready.go:81] duration metric: took 324.143266ms waiting for pod "kube-proxy-zk4wl" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.950939  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "kube-proxy-zk4wl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.950948  177409 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:51.465200  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:51.465227  177409 pod_ready.go:81] duration metric: took 514.267219ms waiting for pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:51.465242  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:51.465251  177409 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:52.111655  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:52.111690  177409 pod_ready.go:81] duration metric: took 646.423162ms waiting for pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:52.111707  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:52.111716  177409 pod_ready.go:38] duration metric: took 1.552263211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
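	(The pod_ready.go entries above wait on each system-critical pod's Ready condition and skip pods whose node is not yet Ready. A rough client-go sketch of that kind of check follows; the kubeconfig path is a placeholder and this is only an approximation of what minikube's helper does, not its actual implementation.)

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the named pod has a Ready condition set to True.
	func isPodReady(client kubernetes.Interface, namespace, name string) (bool, error) {
		pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Placeholder kubeconfig path; minikube uses its per-profile kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ready, err := isPodReady(client, "kube-system", "coredns-5dd5756b68-ftv9l")
		fmt.Println(ready, err)
	}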
	I1213 00:09:52.111735  177409 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:09:52.125125  177409 ops.go:34] apiserver oom_adj: -16
	I1213 00:09:52.125152  177409 kubeadm.go:640] restartCluster took 22.955643397s
	I1213 00:09:52.125175  177409 kubeadm.go:406] StartCluster complete in 23.016262726s
	I1213 00:09:52.125204  177409 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:52.125379  177409 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:09:52.128126  177409 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:52.226763  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:09:52.226947  177409 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:09:52.227030  177409 config.go:182] Loaded profile config "default-k8s-diff-port-743278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:09:52.227060  177409 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-743278"
	I1213 00:09:52.227071  177409 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-743278"
	I1213 00:09:52.227082  177409 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-743278"
	I1213 00:09:52.227088  177409 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-743278"
	W1213 00:09:52.227092  177409 addons.go:240] addon storage-provisioner should already be in state true
	I1213 00:09:52.227115  177409 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-743278"
	I1213 00:09:52.227154  177409 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-743278"
	I1213 00:09:52.227165  177409 host.go:66] Checking if "default-k8s-diff-port-743278" exists ...
	W1213 00:09:52.227173  177409 addons.go:240] addon metrics-server should already be in state true
	I1213 00:09:52.227252  177409 host.go:66] Checking if "default-k8s-diff-port-743278" exists ...
	I1213 00:09:52.227633  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.227667  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.227633  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.227698  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.227728  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.227794  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.500974  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I1213 00:09:52.501503  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.502103  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.502130  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.502518  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.503096  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.503120  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40999
	I1213 00:09:52.503173  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.503249  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35141
	I1213 00:09:52.503460  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.503653  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.503952  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.503979  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.504117  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.504137  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.504326  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.504485  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.504680  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:52.504910  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.504957  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.508425  177409 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-743278"
	W1213 00:09:52.508466  177409 addons.go:240] addon default-storageclass should already be in state true
	I1213 00:09:52.508495  177409 host.go:66] Checking if "default-k8s-diff-port-743278" exists ...
	I1213 00:09:52.508941  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.508989  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.520593  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35027
	I1213 00:09:52.521055  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37697
	I1213 00:09:52.521104  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.521443  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.521602  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.521630  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.521891  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.521917  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.521956  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.522162  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:52.522300  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.522506  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:52.523942  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:52.524208  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40367
	I1213 00:09:52.524419  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:52.612780  177409 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 00:09:52.524612  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.755661  177409 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 00:09:52.941509  177409 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:52.941551  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 00:09:53.149407  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:52.881597  177409 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-743278" context rescaled to 1 replicas
	I1213 00:09:53.149472  177409 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:09:53.149496  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:09:52.884700  177409 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1213 00:09:52.756216  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:53.149523  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:53.149532  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:53.149484  177409 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:09:53.150147  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:53.153109  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.153288  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.360880  177409 out.go:177] * Verifying Kubernetes components...
	I1213 00:09:53.153717  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:53.153952  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:53.361036  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:50.301405  177307 pod_ready.go:102] pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:52.803001  177307 pod_ready.go:102] pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:53.361074  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:53.466451  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.361087  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.361322  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:53.466546  177409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:09:53.361364  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:53.361590  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:53.466661  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:53.466906  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:53.466963  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:53.467166  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:53.467266  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:53.489618  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41785
	I1213 00:09:53.490349  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:53.490932  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:53.490951  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:53.491365  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:53.491579  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:53.494223  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:53.495774  177409 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:09:53.495796  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:09:53.495816  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:53.499620  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.500099  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:53.500124  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.500405  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:53.500592  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:53.500734  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:53.501069  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:53.667878  177409 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-743278" to be "Ready" ...
	I1213 00:09:53.806167  177409 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 00:09:53.806194  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 00:09:53.807837  177409 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:09:53.808402  177409 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:09:53.915171  177409 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 00:09:53.915199  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 00:09:53.993146  177409 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:09:53.993172  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 00:09:54.071008  177409 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:09:50.865405  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:50.866538  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:50.867587  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1213 00:09:50.871289  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:50.872472  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1213 00:09:50.878541  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:50.882665  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:50.978405  176813 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1213 00:09:50.978458  176813 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:50.978527  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.038778  176813 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1213 00:09:51.038824  176813 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:51.038877  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.048868  176813 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1213 00:09:51.048925  176813 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1213 00:09:51.048983  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.054956  176813 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1213 00:09:51.055003  176813 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:51.055045  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.055045  176813 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1213 00:09:51.055133  176813 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1213 00:09:51.055162  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.069915  176813 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1213 00:09:51.069971  176813 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:51.070018  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.073904  176813 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1213 00:09:51.073955  176813 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:51.073990  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:51.074058  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:51.073997  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.074127  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1213 00:09:51.074173  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:51.074270  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1213 00:09:51.076866  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:51.216889  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:51.217032  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1213 00:09:51.217046  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1213 00:09:51.217118  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1213 00:09:51.217157  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1213 00:09:51.217213  176813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1213 00:09:51.217804  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1213 00:09:51.217887  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1213 00:09:51.224310  176813 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1213 00:09:51.224329  176813 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1213 00:09:51.224373  176813 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1213 00:09:51.270398  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1213 00:09:51.651719  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:53.599238  176813 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.374835203s)
	I1213 00:09:53.599269  176813 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1213 00:09:53.599323  176813 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.947557973s)
	I1213 00:09:53.599398  176813 cache_images.go:92] LoadImages completed in 2.869881827s
	W1213 00:09:53.599497  176813 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I1213 00:09:53.599587  176813 ssh_runner.go:195] Run: crio config
	I1213 00:09:53.669735  176813 cni.go:84] Creating CNI manager for ""
	I1213 00:09:53.669767  176813 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:53.669792  176813 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1213 00:09:53.669814  176813 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.70 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-508612 NodeName:old-k8s-version-508612 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1213 00:09:53.669991  176813 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-508612"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-508612
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.70:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 00:09:53.670076  176813 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-508612 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-508612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1213 00:09:53.670138  176813 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1213 00:09:53.680033  176813 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:09:53.680120  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:09:53.689595  176813 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1213 00:09:53.707167  176813 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 00:09:53.726978  176813 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1213 00:09:53.746191  176813 ssh_runner.go:195] Run: grep 192.168.39.70	control-plane.minikube.internal$ /etc/hosts
	I1213 00:09:53.750290  176813 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:53.763369  176813 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612 for IP: 192.168.39.70
	I1213 00:09:53.763407  176813 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:53.763598  176813 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:09:53.763671  176813 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:09:53.763776  176813 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/client.key
	I1213 00:09:53.763855  176813 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/apiserver.key.5467de6f
	I1213 00:09:53.763914  176813 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/proxy-client.key
	I1213 00:09:53.764055  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:09:53.764098  176813 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:09:53.764115  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:09:53.764158  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:09:53.764195  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:09:53.764238  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:09:53.764297  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:53.765315  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:09:53.793100  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 00:09:53.821187  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:09:53.847791  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 00:09:53.873741  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:09:53.903484  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:09:53.930420  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:09:53.958706  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:09:53.986236  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:09:54.011105  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:09:54.034546  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:09:54.070680  176813 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:09:54.093063  176813 ssh_runner.go:195] Run: openssl version
	I1213 00:09:54.100686  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:09:54.114647  176813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:54.121380  176813 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:54.121463  176813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:54.128895  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 00:09:54.142335  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:09:54.155146  176813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:09:54.159746  176813 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:09:54.159817  176813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:09:54.166153  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1213 00:09:54.176190  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:09:54.187049  176813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:09:54.191667  176813 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:09:54.191737  176813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:09:54.197335  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 00:09:54.208790  176813 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:09:54.213230  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 00:09:54.219377  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 00:09:54.225539  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 00:09:54.232970  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 00:09:54.240720  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 00:09:54.247054  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
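	Note: the openssl x509 -checkend 86400 runs above ask whether each control-plane certificate will still be valid 24 hours from now. An equivalent check in Go using crypto/x509 (a sketch assuming a single-certificate PEM file; not the code minikube runs):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file
    // expires within the given duration (the -checkend 86400 equivalent).
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }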
	I1213 00:09:54.253486  176813 kubeadm.go:404] StartCluster: {Name:old-k8s-version-508612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-508612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:09:54.253600  176813 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:09:54.253674  176813 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:54.303024  176813 cri.go:89] found id: ""
	I1213 00:09:54.303102  176813 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:09:54.317795  176813 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1213 00:09:54.317827  176813 kubeadm.go:636] restartCluster start
	I1213 00:09:54.317884  176813 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 00:09:54.331180  176813 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:54.332572  176813 kubeconfig.go:92] found "old-k8s-version-508612" server: "https://192.168.39.70:8443"
	I1213 00:09:54.335079  176813 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 00:09:54.346247  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:54.346292  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:54.362692  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:54.362720  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:54.362776  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:54.377570  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:54.878307  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:54.878384  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:54.891159  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:55.377679  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:55.377789  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:55.392860  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
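	Note: the repeated "Checking apiserver status" entries above are minikube polling for a kube-apiserver process with pgrep -xnf roughly twice a second; every probe fails because the old control-plane containers are gone before the restart. A minimal sketch of that kind of poll loop (run locally with pgrep rather than over SSH; purely illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until a process matching pattern appears
    // or the timeout elapses.
    func waitForProcess(pattern string, timeout, interval time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil // pgrep exits 0 when a match is found
    		}
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
    }

    func main() {
    	err := waitForProcess("kube-apiserver.*minikube.*", 10*time.Second, 500*time.Millisecond)
    	fmt.Println(err)
    }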
	I1213 00:09:52.229764  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:54.232636  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:55.162034  177409 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.354143542s)
	I1213 00:09:55.162091  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.162103  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.162486  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.162503  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.162519  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.162536  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.162554  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.162887  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.162916  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.162961  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.255531  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.255561  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.255844  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.255867  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.686976  177409 node_ready.go:58] node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:55.814831  177409 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.006392676s)
	I1213 00:09:55.814885  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.814905  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.815237  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.815300  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.815315  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.815325  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.815675  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.815693  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.815721  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.959447  177409 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.88836869s)
	I1213 00:09:55.959502  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.959519  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.959909  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.959931  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.959941  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.959943  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.959950  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.960189  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.960205  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.960223  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.960226  177409 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-743278"
	I1213 00:09:55.962464  177409 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1213 00:09:54.302018  177307 pod_ready.go:92] pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:54.302047  177307 pod_ready.go:81] duration metric: took 6.038549186s waiting for pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.302061  177307 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8k9x6" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.308192  177307 pod_ready.go:92] pod "kube-proxy-8k9x6" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:54.308220  177307 pod_ready.go:81] duration metric: took 6.150452ms waiting for pod "kube-proxy-8k9x6" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.308233  177307 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.829614  177307 pod_ready.go:92] pod "kube-scheduler-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:54.829639  177307 pod_ready.go:81] duration metric: took 521.398817ms waiting for pod "kube-scheduler-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.829649  177307 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:56.842731  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:55.964691  177409 addons.go:502] enable addons completed in 3.737755135s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1213 00:09:58.183398  177409 node_ready.go:58] node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:58.683603  177409 node_ready.go:49] node "default-k8s-diff-port-743278" has status "Ready":"True"
	I1213 00:09:58.683629  177409 node_ready.go:38] duration metric: took 5.01572337s waiting for node "default-k8s-diff-port-743278" to be "Ready" ...
	I1213 00:09:58.683638  177409 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:58.692636  177409 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:58.699084  177409 pod_ready.go:92] pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:58.699103  177409 pod_ready.go:81] duration metric: took 6.437856ms waiting for pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:58.699111  177409 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
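	Note: the pod_ready waits interleaved through this log poll each pod's Ready condition until it reports True or the per-pod timeout expires. A minimal client-go sketch of the same check (illustrative only; it assumes a kubeconfig at the default home location rather than minikube's internal helpers):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for start := time.Now(); time.Since(start) < 6*time.Minute; time.Sleep(2 * time.Second) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-default-k8s-diff-port-743278", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }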
	I1213 00:09:55.877904  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:55.877977  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:55.893729  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:56.377737  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:56.377803  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:56.389754  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:56.878464  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:56.878530  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:56.891849  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:57.377841  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:57.377929  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:57.389962  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:57.878384  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:57.878464  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:57.892518  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:58.378033  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:58.378119  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:58.391780  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:58.878309  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:58.878397  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:58.890677  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:59.378117  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:59.378239  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:59.390695  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:59.878240  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:59.878318  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:59.889688  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:00.378278  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:00.378376  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:00.390756  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:56.727591  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:58.729633  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:59.343431  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:01.344195  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:03.842943  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:00.718294  177409 pod_ready.go:102] pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:01.216472  177409 pod_ready.go:92] pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.216499  177409 pod_ready.go:81] duration metric: took 2.517381433s waiting for pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.216513  177409 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.221993  177409 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.222016  177409 pod_ready.go:81] duration metric: took 5.495703ms waiting for pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.222026  177409 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.227513  177409 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.227543  177409 pod_ready.go:81] duration metric: took 5.506889ms waiting for pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.227555  177409 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zk4wl" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.485096  177409 pod_ready.go:92] pod "kube-proxy-zk4wl" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.485120  177409 pod_ready.go:81] duration metric: took 257.55839ms waiting for pod "kube-proxy-zk4wl" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.485131  177409 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.886812  177409 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.886843  177409 pod_ready.go:81] duration metric: took 401.704296ms waiting for pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.886860  177409 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:04.192658  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:00.878385  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:00.878514  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:00.891279  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:01.378010  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:01.378120  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:01.389897  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:01.878496  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:01.878581  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:01.890674  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:02.377657  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:02.377767  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:02.389165  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:02.877744  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:02.877886  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:02.889536  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:03.378083  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:03.378206  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:03.390009  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:03.878637  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:03.878757  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:03.891565  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:04.347244  176813 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1213 00:10:04.347324  176813 kubeadm.go:1135] stopping kube-system containers ...
	I1213 00:10:04.347339  176813 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 00:10:04.347431  176813 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:10:04.391480  176813 cri.go:89] found id: ""
	I1213 00:10:04.391558  176813 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 00:10:04.407659  176813 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:10:04.416545  176813 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:10:04.416616  176813 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:10:04.425366  176813 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1213 00:10:04.425393  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:04.553907  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:05.643662  176813 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.089700044s)
	I1213 00:10:05.643704  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:01.230857  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:03.728598  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:05.729292  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:05.843723  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:07.844549  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:06.193695  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:08.194425  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:05.881077  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:05.983444  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:06.106543  176813 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:10:06.106637  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:06.120910  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:06.637294  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:07.137087  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:07.636989  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:07.659899  176813 api_server.go:72] duration metric: took 1.5533541s to wait for apiserver process to appear ...
	I1213 00:10:07.659925  176813 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:10:07.659949  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:08.229410  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:10.729881  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:10.344919  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:12.842700  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:10.692378  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:12.693810  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:12.660316  176813 api_server.go:269] stopped: https://192.168.39.70:8443/healthz: Get "https://192.168.39.70:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 00:10:12.660365  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:13.933418  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:10:13.933452  176813 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:10:14.434114  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:14.442223  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1213 00:10:14.442261  176813 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1213 00:10:14.934425  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:14.941188  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1213 00:10:14.941232  176813 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1213 00:10:15.433614  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:15.441583  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I1213 00:10:15.449631  176813 api_server.go:141] control plane version: v1.16.0
	I1213 00:10:15.449656  176813 api_server.go:131] duration metric: took 7.789725712s to wait for apiserver health ...
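	Note: the healthz probes above progress from client timeouts, to 403 for the anonymous user, to 500 while post-start hooks finish, and finally to 200 once the restarted apiserver is fully serving. A minimal sketch of such a probe, skipping TLS verification the way an unauthenticated bootstrap check must (illustrative only; the endpoint is the one from this log):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// An unauthenticated bootstrap check cannot verify the cluster's
    		// serving certificate chain here, so verification is skipped.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for i := 0; i < 30; i++ {
    		resp, err := client.Get("https://192.168.39.70:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("status %d: %s\n", resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }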
	I1213 00:10:15.449671  176813 cni.go:84] Creating CNI manager for ""
	I1213 00:10:15.449677  176813 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:10:15.451328  176813 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:10:15.452690  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:10:15.463558  176813 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
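	Note: the 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration announced two lines earlier. A sketch of writing such a conflist in Go; the JSON below is a generic bridge-plugin example, not the exact contents minikube generates:

    package main

    import "os"

    // An illustrative bridge CNI conflist; all field values are examples only.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }`

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }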
	I1213 00:10:15.482997  176813 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:10:15.493646  176813 system_pods.go:59] 7 kube-system pods found
	I1213 00:10:15.493674  176813 system_pods.go:61] "coredns-5644d7b6d9-jnhmk" [38a0c948-a47e-4566-ad47-376943787ca1] Running
	I1213 00:10:15.493679  176813 system_pods.go:61] "etcd-old-k8s-version-508612" [80e685b2-cd70-4b7d-b00c-feda3ab1a509] Running
	I1213 00:10:15.493683  176813 system_pods.go:61] "kube-apiserver-old-k8s-version-508612" [657f1d7b-4fcb-44d4-96d3-3cc659fb9f0a] Running
	I1213 00:10:15.493688  176813 system_pods.go:61] "kube-controller-manager-old-k8s-version-508612" [d84a0927-7d19-4bba-8afd-b32877a9aee3] Running
	I1213 00:10:15.493692  176813 system_pods.go:61] "kube-proxy-fpd4j" [f2e9e528-576f-4339-b208-09ee5dbe7fcb] Running
	I1213 00:10:15.493696  176813 system_pods.go:61] "kube-scheduler-old-k8s-version-508612" [ce5ff03a-23bf-4cce-8795-58e412fc841c] Running
	I1213 00:10:15.493699  176813 system_pods.go:61] "storage-provisioner" [98a03a45-0cd3-40b4-9e66-6df14db5a848] Running
	I1213 00:10:15.493706  176813 system_pods.go:74] duration metric: took 10.683423ms to wait for pod list to return data ...
	I1213 00:10:15.493715  176813 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:10:15.498679  176813 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:10:15.498726  176813 node_conditions.go:123] node cpu capacity is 2
	I1213 00:10:15.498742  176813 node_conditions.go:105] duration metric: took 5.021318ms to run NodePressure ...
	I1213 00:10:15.498767  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:15.762302  176813 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1213 00:10:15.766665  176813 retry.go:31] will retry after 288.591747ms: kubelet not initialised
	I1213 00:10:13.228878  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:15.728396  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:15.343194  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:17.344384  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:15.193995  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:17.693024  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:19.693723  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:16.063637  176813 retry.go:31] will retry after 250.40677ms: kubelet not initialised
	I1213 00:10:16.320362  176813 retry.go:31] will retry after 283.670967ms: kubelet not initialised
	I1213 00:10:16.610834  176813 retry.go:31] will retry after 810.845397ms: kubelet not initialised
	I1213 00:10:17.427101  176813 retry.go:31] will retry after 1.00058932s: kubelet not initialised
	I1213 00:10:18.498625  176813 retry.go:31] will retry after 2.616819597s: kubelet not initialised
	I1213 00:10:18.226990  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:20.228211  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:19.345330  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:21.843959  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:22.192449  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:24.193001  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:21.120283  176813 retry.go:31] will retry after 1.883694522s: kubelet not initialised
	I1213 00:10:23.009312  176813 retry.go:31] will retry after 2.899361823s: kubelet not initialised
	I1213 00:10:22.727450  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:24.729952  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:24.342673  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:26.343639  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:28.842489  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:26.696279  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:29.194453  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:25.914801  176813 retry.go:31] will retry after 8.466541404s: kubelet not initialised
	I1213 00:10:27.227947  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:29.229430  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:30.843429  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:32.844457  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:31.692122  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:33.694437  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:34.391931  176813 retry.go:31] will retry after 6.686889894s: kubelet not initialised
	I1213 00:10:31.729052  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:33.730399  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:35.344029  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:37.842694  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:36.193427  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:38.193688  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:36.226978  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:38.227307  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:40.227797  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:40.343702  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:42.841574  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:40.693443  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:42.693668  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:41.084957  176813 retry.go:31] will retry after 18.68453817s: kubelet not initialised
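	Note: the retry.go intervals above (growing from roughly 0.25s to 18.7s) show the jittered, exponentially growing backoff applied while waiting for the restarted kubelet to initialise. A generic sketch of backoff with jitter (not the retry.go implementation):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff retries fn with exponentially growing, jittered delays.
    func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
    	delay := base
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		time.Sleep(delay + jitter)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	err := retryWithBackoff(func() error {
    		return fmt.Errorf("kubelet not initialised")
    	}, 5, 250*time.Millisecond)
    	fmt.Println(err)
    }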
	I1213 00:10:42.229433  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:44.728322  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:44.843586  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:46.844269  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:45.192582  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:47.691806  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:49.692545  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:47.227469  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:49.228908  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:49.343743  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:51.843948  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:51.694308  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:54.192629  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:51.728175  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:54.226904  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:54.342077  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:56.343115  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:58.345031  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:56.193137  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:58.693873  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:59.777116  176813 kubeadm.go:787] kubelet initialised
	I1213 00:10:59.777150  176813 kubeadm.go:788] duration metric: took 44.014819539s waiting for restarted kubelet to initialise ...
	I1213 00:10:59.777162  176813 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:10:59.782802  176813 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-jnhmk" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.788307  176813 pod_ready.go:92] pod "coredns-5644d7b6d9-jnhmk" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:59.788348  176813 pod_ready.go:81] duration metric: took 5.514049ms waiting for pod "coredns-5644d7b6d9-jnhmk" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.788356  176813 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-xsbd5" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.792569  176813 pod_ready.go:92] pod "coredns-5644d7b6d9-xsbd5" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:59.792588  176813 pod_ready.go:81] duration metric: took 4.224934ms waiting for pod "coredns-5644d7b6d9-xsbd5" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.792599  176813 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.797096  176813 pod_ready.go:92] pod "etcd-old-k8s-version-508612" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:59.797119  176813 pod_ready.go:81] duration metric: took 4.508662ms waiting for pod "etcd-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.797130  176813 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.801790  176813 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-508612" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:59.801811  176813 pod_ready.go:81] duration metric: took 4.673597ms waiting for pod "kube-apiserver-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.801818  176813 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.175474  176813 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-508612" in "kube-system" namespace has status "Ready":"True"
	I1213 00:11:00.175504  176813 pod_ready.go:81] duration metric: took 373.677737ms waiting for pod "kube-controller-manager-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.175523  176813 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fpd4j" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.576344  176813 pod_ready.go:92] pod "kube-proxy-fpd4j" in "kube-system" namespace has status "Ready":"True"
	I1213 00:11:00.576373  176813 pod_ready.go:81] duration metric: took 400.842191ms waiting for pod "kube-proxy-fpd4j" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.576387  176813 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:56.229570  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:58.728770  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:00.843201  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:03.343182  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:01.199677  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:03.201427  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:00.976886  176813 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-508612" in "kube-system" namespace has status "Ready":"True"
	I1213 00:11:00.976908  176813 pod_ready.go:81] duration metric: took 400.512629ms waiting for pod "kube-scheduler-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.976920  176813 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:03.283224  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:05.284030  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:01.229393  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:03.728570  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:05.843264  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:08.343228  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:05.694505  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:08.197100  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:07.285680  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:09.786591  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:06.227705  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:08.229577  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:10.727791  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:10.343300  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:12.843162  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:10.695161  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:13.195051  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:12.285865  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:14.785354  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:12.728656  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:15.227890  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:14.844312  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:16.847144  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:15.692597  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:18.193383  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:17.284986  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:19.786139  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:17.229608  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:19.728503  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:19.344056  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:21.843070  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:23.844051  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:20.692417  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:22.692912  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:24.693204  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:22.285292  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:24.784342  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:22.227286  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:24.228831  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:26.342758  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:28.347392  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:26.693376  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:28.696971  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:27.284643  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:29.284776  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:26.727796  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:29.227690  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:30.843482  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:32.844695  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:31.191962  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:33.192585  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:31.285494  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:33.285863  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:35.791234  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:31.727767  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:33.728047  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:35.342092  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:37.342356  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:35.196354  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:37.693679  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:38.285349  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:40.785094  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:36.228379  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:38.728361  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:40.728752  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:39.342944  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:41.343229  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:43.842669  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:40.192636  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:42.696348  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:43.284960  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:45.783972  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:42.730357  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:45.228371  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:45.844034  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:48.345622  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:45.199304  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:47.692399  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:49.692916  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:47.784062  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:49.784533  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:47.232607  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:49.727709  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:50.842207  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:52.845393  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:52.193829  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:54.694220  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:51.784671  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:54.284709  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:51.728053  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:53.729081  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:55.342783  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:57.343274  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:56.694508  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:59.194904  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:56.285342  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:58.783460  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:56.227395  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:58.231694  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:00.727822  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:59.343618  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:01.842326  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:03.842653  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:01.197290  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:03.694223  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:01.285393  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:03.784968  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:05.786110  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:02.728596  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:05.227456  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:05.843038  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:08.342838  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:05.695124  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:08.192630  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:08.284460  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:10.284768  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:07.728787  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:10.227036  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:10.344532  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:12.841921  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:10.193483  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:12.196550  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:14.693706  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:12.784036  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:14.784471  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:12.227952  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:14.228178  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:14.842965  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:17.343683  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:17.193131  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:19.692561  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:16.785596  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:19.285058  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:16.726702  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:18.728269  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:19.843031  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:22.343417  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:22.191869  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:24.193973  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:21.783890  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:23.784341  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:25.784521  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:21.227269  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:23.227691  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:25.228239  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:24.343805  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:26.346354  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:28.844254  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:26.693293  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:29.193583  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:27.784904  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:30.285014  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:27.727045  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:29.728691  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:31.346007  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:33.843421  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:31.194160  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:33.691639  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:32.784701  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:35.284958  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:32.226511  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:34.228892  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:36.342384  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:38.343546  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:35.694257  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:38.191620  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:37.286143  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:39.783802  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:36.727306  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:38.728168  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:40.850557  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:43.342393  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:40.192328  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:42.192749  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:44.693406  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:41.784411  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:43.789293  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:41.228591  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:43.728133  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:45.842401  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:47.843839  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:47.193847  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:49.692840  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:46.284387  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:48.284692  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:50.285419  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:46.228594  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:48.728575  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:50.343073  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:52.843034  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:51.692895  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:54.196344  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:52.785093  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:54.785238  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:51.226704  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:53.228359  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:55.228418  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:54.847060  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:57.345339  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:56.693854  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:59.191098  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:57.285101  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:59.783955  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:57.727063  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:59.727437  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:59.847179  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:02.343433  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:01.192388  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:03.693056  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:01.784055  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:03.784840  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:01.727635  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:03.727705  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:04.346684  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:06.843294  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:06.192928  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:08.693240  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:06.284092  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:08.784303  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:10.784971  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:06.228019  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:08.727726  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:09.342622  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:11.343211  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:13.843894  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:10.698298  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:13.191387  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:13.285854  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:15.790625  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:11.228300  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:13.730143  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:16.343574  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:18.343896  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:15.195797  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:17.694620  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:18.283712  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:20.284937  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:16.227280  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:17.419163  177122 pod_ready.go:81] duration metric: took 4m0.000090271s waiting for pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace to be "Ready" ...
	E1213 00:13:17.419207  177122 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1213 00:13:17.419233  177122 pod_ready.go:38] duration metric: took 4m12.64031929s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:13:17.419260  177122 kubeadm.go:640] restartCluster took 4m32.91279931s
	W1213 00:13:17.419346  177122 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1213 00:13:17.419387  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
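The lines above record the 4m0s WaitExtra deadline expiring for metrics-server-57f55c9bc5-fx5pd, after which minikube abandons restartCluster and falls back to kubeadm reset. Below is a minimal, illustrative Go sketch of the same kind of deadline-bounded readiness poll using client-go; the kubeconfig path and label selector are assumptions for the example, not values taken from minikube's pod_ready.go.

// Illustrative sketch only: poll pods matching a label selector until every one
// reports the Ready condition, or a 4-minute deadline expires (mirroring the
// timeout seen in the log). Kubeconfig path and selector are assumed values.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func allReady(pods []corev1.Pod) bool {
	for _, p := range pods {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false
		}
	}
	return true
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
		if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
			fmt.Println("all matching pods are Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("waitPodCondition: context deadline exceeded") // same failure mode as the log above
			return
		case <-time.After(2 * time.Second):
		}
	}
}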
	I1213 00:13:20.847802  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:23.342501  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:20.193039  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:22.693730  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:22.285212  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:24.783901  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:25.343029  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:27.842840  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:25.194640  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:27.692515  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:29.695543  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:26.785503  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:29.284618  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:33.603614  177122 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (16.184189808s)
	I1213 00:13:33.603692  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:13:33.617573  177122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:13:33.626779  177122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:13:33.636160  177122 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:13:33.636214  177122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 00:13:33.694141  177122 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1213 00:13:33.694267  177122 kubeadm.go:322] [preflight] Running pre-flight checks
	I1213 00:13:33.853582  177122 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 00:13:33.853718  177122 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 00:13:33.853992  177122 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 00:13:34.092007  177122 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 00:13:29.844324  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:32.345926  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:34.093975  177122 out.go:204]   - Generating certificates and keys ...
	I1213 00:13:34.094125  177122 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1213 00:13:34.094198  177122 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1213 00:13:34.094297  177122 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 00:13:34.094492  177122 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1213 00:13:34.095287  177122 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 00:13:34.096041  177122 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1213 00:13:34.096841  177122 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1213 00:13:34.097551  177122 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1213 00:13:34.098399  177122 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 00:13:34.099122  177122 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 00:13:34.099844  177122 kubeadm.go:322] [certs] Using the existing "sa" key
	I1213 00:13:34.099929  177122 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 00:13:34.191305  177122 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 00:13:34.425778  177122 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 00:13:34.601958  177122 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 00:13:34.747536  177122 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 00:13:34.748230  177122 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 00:13:34.750840  177122 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 00:13:32.193239  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:34.691928  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:31.286291  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:33.786852  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:34.752409  177122 out.go:204]   - Booting up control plane ...
	I1213 00:13:34.752562  177122 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 00:13:34.752659  177122 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 00:13:34.752994  177122 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 00:13:34.772157  177122 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 00:13:34.774789  177122 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 00:13:34.774854  177122 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1213 00:13:34.926546  177122 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 00:13:34.346782  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:36.847723  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:36.694243  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:39.195903  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:36.284979  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:38.285685  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:40.286174  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:39.345989  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:41.353093  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:43.847024  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:43.435528  177122 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.506764 seconds
	I1213 00:13:43.435691  177122 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 00:13:43.454840  177122 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 00:13:43.997250  177122 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 00:13:43.997537  177122 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-335807 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 00:13:44.513097  177122 kubeadm.go:322] [bootstrap-token] Using token: a9yhsz.n5p4z1j5jkbj68ov
	I1213 00:13:44.514695  177122 out.go:204]   - Configuring RBAC rules ...
	I1213 00:13:44.514836  177122 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 00:13:44.520134  177122 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 00:13:44.528726  177122 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 00:13:44.535029  177122 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 00:13:44.539162  177122 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 00:13:44.545990  177122 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 00:13:44.561964  177122 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 00:13:44.831402  177122 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1213 00:13:44.927500  177122 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1213 00:13:44.931294  177122 kubeadm.go:322] 
	I1213 00:13:44.931371  177122 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1213 00:13:44.931389  177122 kubeadm.go:322] 
	I1213 00:13:44.931500  177122 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1213 00:13:44.931509  177122 kubeadm.go:322] 
	I1213 00:13:44.931535  177122 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1213 00:13:44.931605  177122 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 00:13:44.931674  177122 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 00:13:44.931681  177122 kubeadm.go:322] 
	I1213 00:13:44.931743  177122 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1213 00:13:44.931752  177122 kubeadm.go:322] 
	I1213 00:13:44.931838  177122 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 00:13:44.931861  177122 kubeadm.go:322] 
	I1213 00:13:44.931938  177122 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1213 00:13:44.932026  177122 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 00:13:44.932139  177122 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 00:13:44.932151  177122 kubeadm.go:322] 
	I1213 00:13:44.932260  177122 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 00:13:44.932367  177122 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1213 00:13:44.932386  177122 kubeadm.go:322] 
	I1213 00:13:44.932533  177122 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token a9yhsz.n5p4z1j5jkbj68ov \
	I1213 00:13:44.932702  177122 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d \
	I1213 00:13:44.932726  177122 kubeadm.go:322] 	--control-plane 
	I1213 00:13:44.932730  177122 kubeadm.go:322] 
	I1213 00:13:44.932797  177122 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1213 00:13:44.932808  177122 kubeadm.go:322] 
	I1213 00:13:44.932927  177122 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token a9yhsz.n5p4z1j5jkbj68ov \
	I1213 00:13:44.933074  177122 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1213 00:13:44.933953  177122 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
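The join commands printed above pin the cluster CA with --discovery-token-ca-cert-hash sha256:aa40a5…. That value is the SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info. The short Go sketch below recomputes such a hash from the certificateDir shown earlier in this run (/var/lib/minikube/certs); treat the exact file name as an assumption about this particular VM.

// Illustrative sketch: recompute a kubeadm-style discovery-token-ca-cert-hash,
// i.e. sha256 over the CA certificate's DER-encoded SubjectPublicKeyInfo.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // certificateDir from the log; file name assumed
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("ca.crt contains no PEM block")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}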
	I1213 00:13:44.934004  177122 cni.go:84] Creating CNI manager for ""
	I1213 00:13:44.934026  177122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:13:44.935893  177122 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:13:41.694337  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:44.192303  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:42.783865  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:44.784599  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:44.937355  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:13:44.961248  177122 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
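Here the bootstrapper creates /etc/cni/net.d and stages a 457-byte 1-k8s.conflist for the bridge CNI it recommended above. The sketch below writes an illustrative bridge-plus-portmap conflist of the general shape CNI expects; the JSON content and pod subnet are assumptions for the example, not the exact file minikube stages.

// Illustrative sketch: stage a bridge CNI conflist. The JSON below is an
// example of the usual bridge+portmap layout, not minikube's exact file.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "k8s",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}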
	I1213 00:13:45.005684  177122 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:13:45.005758  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:45.005789  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=embed-certs-335807 minikube.k8s.io/updated_at=2023_12_13T00_13_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:45.117205  177122 ops.go:34] apiserver oom_adj: -16
	I1213 00:13:45.402961  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:45.532503  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:46.343927  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:48.843509  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:46.197988  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:48.691611  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:46.785080  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:49.283316  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:46.138647  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:46.639104  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:47.139139  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:47.638244  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:48.138634  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:48.638352  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:49.138616  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:49.639061  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:50.138633  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:50.639013  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:51.343525  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:53.345044  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:50.693254  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:52.693448  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:51.286352  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:53.782966  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:55.786792  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:51.138430  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:51.638340  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:52.138696  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:52.638727  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:53.138509  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:53.639092  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:54.138153  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:54.638781  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:55.138875  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:55.639166  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:56.138534  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:56.638726  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:57.138427  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:57.273101  177122 kubeadm.go:1088] duration metric: took 12.26741009s to wait for elevateKubeSystemPrivileges.
	I1213 00:13:57.273139  177122 kubeadm.go:406] StartCluster complete in 5m12.825293837s
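The long run of `kubectl get sa default` invocations above is minikube waiting for the default ServiceAccount to exist before finishing elevateKubeSystemPrivileges; in this run it took about 12.3s at a roughly 500ms cadence. A minimal sketch of that retry pattern with os/exec follows; the binary and kubeconfig paths are copied from the log, while the interval and timeout are assumptions.

// Illustrative sketch: re-run `kubectl get sa default` every 500ms until it
// succeeds or a timeout expires, mirroring the polling visible in the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.28.4/kubectl" // path from the log
	deadline := time.Now().Add(2 * time.Minute)             // assumed timeout
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("default service account exists:\n%s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}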
	I1213 00:13:57.273163  177122 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:13:57.273294  177122 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:13:57.275845  177122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:13:57.276142  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:13:57.276488  177122 config.go:182] Loaded profile config "embed-certs-335807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:13:57.276665  177122 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:13:57.276739  177122 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-335807"
	I1213 00:13:57.276756  177122 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-335807"
	W1213 00:13:57.276765  177122 addons.go:240] addon storage-provisioner should already be in state true
	I1213 00:13:57.276812  177122 host.go:66] Checking if "embed-certs-335807" exists ...
	I1213 00:13:57.277245  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.277283  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.277356  177122 addons.go:69] Setting default-storageclass=true in profile "embed-certs-335807"
	I1213 00:13:57.277374  177122 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-335807"
	I1213 00:13:57.277528  177122 addons.go:69] Setting metrics-server=true in profile "embed-certs-335807"
	I1213 00:13:57.277545  177122 addons.go:231] Setting addon metrics-server=true in "embed-certs-335807"
	W1213 00:13:57.277552  177122 addons.go:240] addon metrics-server should already be in state true
	I1213 00:13:57.277599  177122 host.go:66] Checking if "embed-certs-335807" exists ...
	I1213 00:13:57.277791  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.277820  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.277923  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.277945  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.296571  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I1213 00:13:57.299879  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I1213 00:13:57.299897  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39203
	I1213 00:13:57.300251  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.300833  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.300906  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.300923  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.300935  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.301294  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.301309  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.301330  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.301419  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.301427  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.301497  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:13:57.301728  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.301774  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.302199  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.302232  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.303181  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.303222  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.304586  177122 addons.go:231] Setting addon default-storageclass=true in "embed-certs-335807"
	W1213 00:13:57.304601  177122 addons.go:240] addon default-storageclass should already be in state true
	I1213 00:13:57.304620  177122 host.go:66] Checking if "embed-certs-335807" exists ...
	I1213 00:13:57.304860  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.304891  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.323403  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I1213 00:13:57.324103  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.324810  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40115
	I1213 00:13:57.324961  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.324985  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.325197  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.325332  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.325518  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:13:57.325910  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.325935  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.326524  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.326731  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:13:57.328013  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:13:57.329895  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:13:57.332188  177122 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 00:13:57.333332  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I1213 00:13:57.333375  177122 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 00:13:57.334952  177122 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:13:57.333392  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 00:13:57.333795  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.337096  177122 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:13:57.337110  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:13:57.337124  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:13:57.337162  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:13:57.337564  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.337585  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.339793  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.340514  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.340572  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.340821  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.341606  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:13:57.341657  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.341829  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:13:57.342023  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:13:57.342206  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:13:57.342411  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:13:57.347105  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.347512  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:13:57.347538  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.347782  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:13:57.347974  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:13:57.348108  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:13:57.348213  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:13:57.359690  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I1213 00:13:57.360385  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.361065  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.361093  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.361567  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.361777  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:13:57.363693  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:13:57.364020  177122 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:13:57.364037  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:13:57.364056  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:13:57.367409  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.367874  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:13:57.367904  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.368086  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:13:57.368287  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:13:57.368470  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:13:57.368619  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:13:57.399353  177122 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-335807" context rescaled to 1 replicas
	I1213 00:13:57.399391  177122 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.249 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:13:57.401371  177122 out.go:177] * Verifying Kubernetes components...
	I1213 00:13:54.829811  177307 pod_ready.go:81] duration metric: took 4m0.000140793s waiting for pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace to be "Ready" ...
	E1213 00:13:54.829844  177307 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1213 00:13:54.829878  177307 pod_ready.go:38] duration metric: took 4m13.138964255s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:13:54.829912  177307 kubeadm.go:640] restartCluster took 4m33.090839538s
	W1213 00:13:54.829977  177307 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1213 00:13:54.830014  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 00:13:55.192745  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:57.193249  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:59.196279  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:57.403699  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:13:57.551632  177122 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 00:13:57.551656  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 00:13:57.590132  177122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:13:57.617477  177122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:13:57.648290  177122 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 00:13:57.648324  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 00:13:57.724394  177122 node_ready.go:35] waiting up to 6m0s for node "embed-certs-335807" to be "Ready" ...
	I1213 00:13:57.724498  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 00:13:57.751666  177122 node_ready.go:49] node "embed-certs-335807" has status "Ready":"True"
	I1213 00:13:57.751704  177122 node_ready.go:38] duration metric: took 27.274531ms waiting for node "embed-certs-335807" to be "Ready" ...
	I1213 00:13:57.751718  177122 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:13:57.764283  177122 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gs4kb" in "kube-system" namespace to be "Ready" ...
	I1213 00:13:57.835941  177122 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:13:57.835968  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 00:13:58.040994  177122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:13:59.867561  177122 pod_ready.go:102] pod "coredns-5dd5756b68-gs4kb" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:00.210713  177122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.620538044s)
	I1213 00:14:00.210745  177122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.593229432s)
	I1213 00:14:00.210763  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.210775  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.210805  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.210846  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.210892  177122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.169863052s)
	I1213 00:14:00.210932  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.210951  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.210803  177122 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.48627637s)
	I1213 00:14:00.211241  177122 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1213 00:14:00.211428  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.211467  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.211477  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.211486  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.211496  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.211804  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.211843  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.211851  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.211860  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.211869  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.211979  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.212025  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.212033  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.212251  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.213205  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.213214  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.213221  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.213253  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.213269  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.213287  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.213300  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.213565  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.213592  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.213600  177122 addons.go:467] Verifying addon metrics-server=true in "embed-certs-335807"
	I1213 00:14:00.213633  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.231892  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.231921  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.232238  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.232257  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.234089  177122 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1213 00:13:58.285584  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:00.286469  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:00.235676  177122 addons.go:502] enable addons completed in 2.959016059s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1213 00:14:01.848071  177122 pod_ready.go:92] pod "coredns-5dd5756b68-gs4kb" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.848093  177122 pod_ready.go:81] duration metric: took 4.083780035s waiting for pod "coredns-5dd5756b68-gs4kb" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.848101  177122 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t92hd" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.854062  177122 pod_ready.go:92] pod "coredns-5dd5756b68-t92hd" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.854082  177122 pod_ready.go:81] duration metric: took 5.975194ms waiting for pod "coredns-5dd5756b68-t92hd" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.854090  177122 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.864033  177122 pod_ready.go:92] pod "etcd-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.864060  177122 pod_ready.go:81] duration metric: took 9.963384ms waiting for pod "etcd-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.864072  177122 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.875960  177122 pod_ready.go:92] pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.875990  177122 pod_ready.go:81] duration metric: took 11.909604ms waiting for pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.876004  177122 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.882084  177122 pod_ready.go:92] pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.882107  177122 pod_ready.go:81] duration metric: took 6.092978ms waiting for pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.882118  177122 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ccq47" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:02.645363  177122 pod_ready.go:92] pod "kube-proxy-ccq47" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:02.645389  177122 pod_ready.go:81] duration metric: took 763.264171ms waiting for pod "kube-proxy-ccq47" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:02.645399  177122 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:03.045476  177122 pod_ready.go:92] pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:03.045502  177122 pod_ready.go:81] duration metric: took 400.097321ms waiting for pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:03.045513  177122 pod_ready.go:38] duration metric: took 5.293782674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:14:03.045530  177122 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:14:03.045584  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:14:03.062802  177122 api_server.go:72] duration metric: took 5.663381439s to wait for apiserver process to appear ...
	I1213 00:14:03.062827  177122 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:14:03.062848  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:14:03.068482  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 200:
	ok
	I1213 00:14:03.069909  177122 api_server.go:141] control plane version: v1.28.4
	I1213 00:14:03.069934  177122 api_server.go:131] duration metric: took 7.099309ms to wait for apiserver health ...
	I1213 00:14:03.069943  177122 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:14:03.248993  177122 system_pods.go:59] 9 kube-system pods found
	I1213 00:14:03.249025  177122 system_pods.go:61] "coredns-5dd5756b68-gs4kb" [d4b86e83-a0a1-4bf8-958e-e154e91f47ef] Running
	I1213 00:14:03.249032  177122 system_pods.go:61] "coredns-5dd5756b68-t92hd" [1ad2dcb3-bcda-42af-b4ce-0d95bba0315f] Running
	I1213 00:14:03.249039  177122 system_pods.go:61] "etcd-embed-certs-335807" [aa5222a7-5670-4550-9d65-6db2095898be] Running
	I1213 00:14:03.249045  177122 system_pods.go:61] "kube-apiserver-embed-certs-335807" [ca0e9de9-8f6a-4bae-b1d1-04b7c0c3cd4c] Running
	I1213 00:14:03.249052  177122 system_pods.go:61] "kube-controller-manager-embed-certs-335807" [f5563afe-3d6c-4b44-b0c0-765da451fd88] Running
	I1213 00:14:03.249057  177122 system_pods.go:61] "kube-proxy-ccq47" [68f3c55f-175e-40af-a769-65c859d5012d] Running
	I1213 00:14:03.249063  177122 system_pods.go:61] "kube-scheduler-embed-certs-335807" [c989cf08-80e9-4b0f-b0e4-f840c6259ace] Running
	I1213 00:14:03.249074  177122 system_pods.go:61] "metrics-server-57f55c9bc5-z7qb4" [b33959c3-63b7-4a81-adda-6d2971036e89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:03.249082  177122 system_pods.go:61] "storage-provisioner" [816660d7-a041-4695-b7da-d977b8891935] Running
	I1213 00:14:03.249095  177122 system_pods.go:74] duration metric: took 179.144496ms to wait for pod list to return data ...
	I1213 00:14:03.249106  177122 default_sa.go:34] waiting for default service account to be created ...
	I1213 00:14:03.444557  177122 default_sa.go:45] found service account: "default"
	I1213 00:14:03.444591  177122 default_sa.go:55] duration metric: took 195.469108ms for default service account to be created ...
	I1213 00:14:03.444603  177122 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 00:14:03.651685  177122 system_pods.go:86] 9 kube-system pods found
	I1213 00:14:03.651714  177122 system_pods.go:89] "coredns-5dd5756b68-gs4kb" [d4b86e83-a0a1-4bf8-958e-e154e91f47ef] Running
	I1213 00:14:03.651719  177122 system_pods.go:89] "coredns-5dd5756b68-t92hd" [1ad2dcb3-bcda-42af-b4ce-0d95bba0315f] Running
	I1213 00:14:03.651723  177122 system_pods.go:89] "etcd-embed-certs-335807" [aa5222a7-5670-4550-9d65-6db2095898be] Running
	I1213 00:14:03.651727  177122 system_pods.go:89] "kube-apiserver-embed-certs-335807" [ca0e9de9-8f6a-4bae-b1d1-04b7c0c3cd4c] Running
	I1213 00:14:03.651731  177122 system_pods.go:89] "kube-controller-manager-embed-certs-335807" [f5563afe-3d6c-4b44-b0c0-765da451fd88] Running
	I1213 00:14:03.651735  177122 system_pods.go:89] "kube-proxy-ccq47" [68f3c55f-175e-40af-a769-65c859d5012d] Running
	I1213 00:14:03.651739  177122 system_pods.go:89] "kube-scheduler-embed-certs-335807" [c989cf08-80e9-4b0f-b0e4-f840c6259ace] Running
	I1213 00:14:03.651745  177122 system_pods.go:89] "metrics-server-57f55c9bc5-z7qb4" [b33959c3-63b7-4a81-adda-6d2971036e89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:03.651750  177122 system_pods.go:89] "storage-provisioner" [816660d7-a041-4695-b7da-d977b8891935] Running
	I1213 00:14:03.651758  177122 system_pods.go:126] duration metric: took 207.148805ms to wait for k8s-apps to be running ...
	I1213 00:14:03.651764  177122 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 00:14:03.651814  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:03.666068  177122 system_svc.go:56] duration metric: took 14.292973ms WaitForService to wait for kubelet.
	I1213 00:14:03.666093  177122 kubeadm.go:581] duration metric: took 6.266680553s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1213 00:14:03.666109  177122 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:14:03.845399  177122 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:14:03.845431  177122 node_conditions.go:123] node cpu capacity is 2
	I1213 00:14:03.845447  177122 node_conditions.go:105] duration metric: took 179.332019ms to run NodePressure ...
	I1213 00:14:03.845462  177122 start.go:228] waiting for startup goroutines ...
	I1213 00:14:03.845470  177122 start.go:233] waiting for cluster config update ...
	I1213 00:14:03.845482  177122 start.go:242] writing updated cluster config ...
	I1213 00:14:03.845850  177122 ssh_runner.go:195] Run: rm -f paused
	I1213 00:14:03.898374  177122 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1213 00:14:03.900465  177122 out.go:177] * Done! kubectl is now configured to use "embed-certs-335807" cluster and "default" namespace by default
	I1213 00:14:01.693061  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:01.886947  177409 pod_ready.go:81] duration metric: took 4m0.000066225s waiting for pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace to be "Ready" ...
	E1213 00:14:01.886997  177409 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1213 00:14:01.887010  177409 pod_ready.go:38] duration metric: took 4m3.203360525s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:14:01.887056  177409 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:14:01.887093  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 00:14:01.887156  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 00:14:01.956004  177409 cri.go:89] found id: "c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:01.956029  177409 cri.go:89] found id: ""
	I1213 00:14:01.956038  177409 logs.go:284] 1 containers: [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1]
	I1213 00:14:01.956096  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:01.961314  177409 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 00:14:01.961388  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 00:14:02.001797  177409 cri.go:89] found id: "fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:02.001825  177409 cri.go:89] found id: ""
	I1213 00:14:02.001835  177409 logs.go:284] 1 containers: [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7]
	I1213 00:14:02.001881  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.007127  177409 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 00:14:02.007193  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 00:14:02.050259  177409 cri.go:89] found id: "125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:02.050283  177409 cri.go:89] found id: ""
	I1213 00:14:02.050294  177409 logs.go:284] 1 containers: [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad]
	I1213 00:14:02.050347  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.056086  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 00:14:02.056147  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 00:14:02.125159  177409 cri.go:89] found id: "c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:02.125189  177409 cri.go:89] found id: ""
	I1213 00:14:02.125199  177409 logs.go:284] 1 containers: [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee]
	I1213 00:14:02.125261  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.129874  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 00:14:02.129939  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 00:14:02.175027  177409 cri.go:89] found id: "545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:02.175058  177409 cri.go:89] found id: ""
	I1213 00:14:02.175067  177409 logs.go:284] 1 containers: [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41]
	I1213 00:14:02.175127  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.180444  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 00:14:02.180515  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 00:14:02.219578  177409 cri.go:89] found id: "57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:02.219603  177409 cri.go:89] found id: ""
	I1213 00:14:02.219610  177409 logs.go:284] 1 containers: [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673]
	I1213 00:14:02.219664  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.223644  177409 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 00:14:02.223693  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 00:14:02.260542  177409 cri.go:89] found id: ""
	I1213 00:14:02.260567  177409 logs.go:284] 0 containers: []
	W1213 00:14:02.260575  177409 logs.go:286] No container was found matching "kindnet"
	I1213 00:14:02.260583  177409 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 00:14:02.260656  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 00:14:02.304058  177409 cri.go:89] found id: "c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:02.304082  177409 cri.go:89] found id: "705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:02.304090  177409 cri.go:89] found id: ""
	I1213 00:14:02.304100  177409 logs.go:284] 2 containers: [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a]
	I1213 00:14:02.304159  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.308606  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.312421  177409 logs.go:123] Gathering logs for kube-scheduler [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee] ...
	I1213 00:14:02.312473  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:02.356415  177409 logs.go:123] Gathering logs for kube-proxy [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41] ...
	I1213 00:14:02.356460  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:02.405870  177409 logs.go:123] Gathering logs for CRI-O ...
	I1213 00:14:02.405902  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 00:14:02.876461  177409 logs.go:123] Gathering logs for describe nodes ...
	I1213 00:14:02.876508  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 00:14:03.037302  177409 logs.go:123] Gathering logs for kube-apiserver [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1] ...
	I1213 00:14:03.037334  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:03.098244  177409 logs.go:123] Gathering logs for kube-controller-manager [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673] ...
	I1213 00:14:03.098273  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:03.163681  177409 logs.go:123] Gathering logs for container status ...
	I1213 00:14:03.163712  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 00:14:03.216883  177409 logs.go:123] Gathering logs for kubelet ...
	I1213 00:14:03.216912  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 00:14:03.267979  177409 logs.go:123] Gathering logs for coredns [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad] ...
	I1213 00:14:03.268011  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:03.309364  177409 logs.go:123] Gathering logs for storage-provisioner [705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a] ...
	I1213 00:14:03.309394  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:03.352427  177409 logs.go:123] Gathering logs for etcd [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7] ...
	I1213 00:14:03.352479  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:03.406508  177409 logs.go:123] Gathering logs for storage-provisioner [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974] ...
	I1213 00:14:03.406547  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:03.449959  177409 logs.go:123] Gathering logs for dmesg ...
	I1213 00:14:03.449985  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 00:14:02.784516  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:05.284536  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:09.408895  177307 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.578851358s)
	I1213 00:14:09.408954  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:09.422044  177307 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:14:09.430579  177307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:14:09.438689  177307 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:14:09.438727  177307 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 00:14:09.493519  177307 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I1213 00:14:09.493657  177307 kubeadm.go:322] [preflight] Running pre-flight checks
	I1213 00:14:09.648151  177307 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 00:14:09.648294  177307 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 00:14:09.648489  177307 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 00:14:09.908199  177307 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 00:14:05.974125  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:14:05.992335  177409 api_server.go:72] duration metric: took 4m12.842684139s to wait for apiserver process to appear ...
	I1213 00:14:05.992364  177409 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:14:05.992411  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 00:14:05.992491  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 00:14:06.037770  177409 cri.go:89] found id: "c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:06.037796  177409 cri.go:89] found id: ""
	I1213 00:14:06.037805  177409 logs.go:284] 1 containers: [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1]
	I1213 00:14:06.037863  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.042949  177409 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 00:14:06.043016  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 00:14:06.090863  177409 cri.go:89] found id: "fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:06.090888  177409 cri.go:89] found id: ""
	I1213 00:14:06.090897  177409 logs.go:284] 1 containers: [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7]
	I1213 00:14:06.090951  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.103859  177409 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 00:14:06.103925  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 00:14:06.156957  177409 cri.go:89] found id: "125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:06.156982  177409 cri.go:89] found id: ""
	I1213 00:14:06.156992  177409 logs.go:284] 1 containers: [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad]
	I1213 00:14:06.157053  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.162170  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 00:14:06.162220  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 00:14:06.204839  177409 cri.go:89] found id: "c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:06.204867  177409 cri.go:89] found id: ""
	I1213 00:14:06.204877  177409 logs.go:284] 1 containers: [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee]
	I1213 00:14:06.204942  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.210221  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 00:14:06.210287  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 00:14:06.255881  177409 cri.go:89] found id: "545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:06.255909  177409 cri.go:89] found id: ""
	I1213 00:14:06.255918  177409 logs.go:284] 1 containers: [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41]
	I1213 00:14:06.255984  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.260853  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 00:14:06.260924  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 00:14:06.308377  177409 cri.go:89] found id: "57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:06.308400  177409 cri.go:89] found id: ""
	I1213 00:14:06.308413  177409 logs.go:284] 1 containers: [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673]
	I1213 00:14:06.308493  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.315028  177409 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 00:14:06.315111  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 00:14:06.365453  177409 cri.go:89] found id: ""
	I1213 00:14:06.365484  177409 logs.go:284] 0 containers: []
	W1213 00:14:06.365494  177409 logs.go:286] No container was found matching "kindnet"
	I1213 00:14:06.365507  177409 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 00:14:06.365568  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 00:14:06.423520  177409 cri.go:89] found id: "c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:06.423545  177409 cri.go:89] found id: "705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:06.423560  177409 cri.go:89] found id: ""
	I1213 00:14:06.423571  177409 logs.go:284] 2 containers: [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a]
	I1213 00:14:06.423628  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.429613  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.434283  177409 logs.go:123] Gathering logs for describe nodes ...
	I1213 00:14:06.434310  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 00:14:06.571329  177409 logs.go:123] Gathering logs for kube-proxy [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41] ...
	I1213 00:14:06.571375  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:06.613274  177409 logs.go:123] Gathering logs for kube-controller-manager [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673] ...
	I1213 00:14:06.613307  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:06.673407  177409 logs.go:123] Gathering logs for dmesg ...
	I1213 00:14:06.673455  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 00:14:06.688886  177409 logs.go:123] Gathering logs for coredns [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad] ...
	I1213 00:14:06.688933  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:06.733130  177409 logs.go:123] Gathering logs for storage-provisioner [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974] ...
	I1213 00:14:06.733162  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:06.780131  177409 logs.go:123] Gathering logs for storage-provisioner [705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a] ...
	I1213 00:14:06.780161  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:06.827465  177409 logs.go:123] Gathering logs for etcd [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7] ...
	I1213 00:14:06.827500  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:06.880245  177409 logs.go:123] Gathering logs for kube-scheduler [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee] ...
	I1213 00:14:06.880286  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:06.919735  177409 logs.go:123] Gathering logs for kubelet ...
	I1213 00:14:06.919764  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 00:14:06.974039  177409 logs.go:123] Gathering logs for CRI-O ...
	I1213 00:14:06.974074  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 00:14:07.400452  177409 logs.go:123] Gathering logs for container status ...
	I1213 00:14:07.400491  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 00:14:07.456759  177409 logs.go:123] Gathering logs for kube-apiserver [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1] ...
	I1213 00:14:07.456789  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:10.010686  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:14:10.017803  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I1213 00:14:10.019196  177409 api_server.go:141] control plane version: v1.28.4
	I1213 00:14:10.019216  177409 api_server.go:131] duration metric: took 4.026844615s to wait for apiserver health ...
	I1213 00:14:10.019225  177409 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:14:10.019251  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 00:14:10.019303  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 00:14:07.784301  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:09.785226  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:09.910151  177307 out.go:204]   - Generating certificates and keys ...
	I1213 00:14:09.910259  177307 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1213 00:14:09.910339  177307 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1213 00:14:09.910444  177307 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 00:14:09.910527  177307 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1213 00:14:09.910616  177307 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 00:14:09.910662  177307 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1213 00:14:09.910713  177307 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1213 00:14:09.910791  177307 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1213 00:14:09.910892  177307 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 00:14:09.911041  177307 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 00:14:09.911107  177307 kubeadm.go:322] [certs] Using the existing "sa" key
	I1213 00:14:09.911186  177307 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 00:14:10.262533  177307 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 00:14:10.508123  177307 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 00:14:10.766822  177307 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 00:14:10.866565  177307 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 00:14:11.206659  177307 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 00:14:11.207238  177307 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 00:14:11.210018  177307 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 00:14:10.061672  177409 cri.go:89] found id: "c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:10.061699  177409 cri.go:89] found id: ""
	I1213 00:14:10.061708  177409 logs.go:284] 1 containers: [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1]
	I1213 00:14:10.061769  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.066426  177409 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 00:14:10.066491  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 00:14:10.107949  177409 cri.go:89] found id: "fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:10.107978  177409 cri.go:89] found id: ""
	I1213 00:14:10.107994  177409 logs.go:284] 1 containers: [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7]
	I1213 00:14:10.108053  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.112321  177409 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 00:14:10.112393  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 00:14:10.169082  177409 cri.go:89] found id: "125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:10.169110  177409 cri.go:89] found id: ""
	I1213 00:14:10.169120  177409 logs.go:284] 1 containers: [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad]
	I1213 00:14:10.169175  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.174172  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 00:14:10.174225  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 00:14:10.220290  177409 cri.go:89] found id: "c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:10.220313  177409 cri.go:89] found id: ""
	I1213 00:14:10.220326  177409 logs.go:284] 1 containers: [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee]
	I1213 00:14:10.220384  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.225241  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 00:14:10.225310  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 00:14:10.271312  177409 cri.go:89] found id: "545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:10.271336  177409 cri.go:89] found id: ""
	I1213 00:14:10.271345  177409 logs.go:284] 1 containers: [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41]
	I1213 00:14:10.271401  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.275974  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 00:14:10.276049  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 00:14:10.324262  177409 cri.go:89] found id: "57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:10.324288  177409 cri.go:89] found id: ""
	I1213 00:14:10.324299  177409 logs.go:284] 1 containers: [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673]
	I1213 00:14:10.324360  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.329065  177409 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 00:14:10.329130  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 00:14:10.375611  177409 cri.go:89] found id: ""
	I1213 00:14:10.375640  177409 logs.go:284] 0 containers: []
	W1213 00:14:10.375648  177409 logs.go:286] No container was found matching "kindnet"
	I1213 00:14:10.375654  177409 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 00:14:10.375725  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 00:14:10.420778  177409 cri.go:89] found id: "c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:10.420807  177409 cri.go:89] found id: "705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:10.420812  177409 cri.go:89] found id: ""
	I1213 00:14:10.420819  177409 logs.go:284] 2 containers: [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a]
	I1213 00:14:10.420866  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.425676  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.430150  177409 logs.go:123] Gathering logs for kubelet ...
	I1213 00:14:10.430180  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 00:14:10.486314  177409 logs.go:123] Gathering logs for dmesg ...
	I1213 00:14:10.486351  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 00:14:10.500915  177409 logs.go:123] Gathering logs for kube-proxy [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41] ...
	I1213 00:14:10.500946  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:10.543073  177409 logs.go:123] Gathering logs for storage-provisioner [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974] ...
	I1213 00:14:10.543108  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:10.584779  177409 logs.go:123] Gathering logs for storage-provisioner [705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a] ...
	I1213 00:14:10.584814  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:10.629824  177409 logs.go:123] Gathering logs for describe nodes ...
	I1213 00:14:10.629852  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 00:14:10.756816  177409 logs.go:123] Gathering logs for kube-apiserver [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1] ...
	I1213 00:14:10.756857  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:10.807506  177409 logs.go:123] Gathering logs for coredns [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad] ...
	I1213 00:14:10.807536  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:10.849398  177409 logs.go:123] Gathering logs for kube-controller-manager [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673] ...
	I1213 00:14:10.849436  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:10.911470  177409 logs.go:123] Gathering logs for CRI-O ...
	I1213 00:14:10.911508  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 00:14:11.288892  177409 logs.go:123] Gathering logs for etcd [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7] ...
	I1213 00:14:11.288941  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:11.361299  177409 logs.go:123] Gathering logs for kube-scheduler [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee] ...
	I1213 00:14:11.361347  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:11.407800  177409 logs.go:123] Gathering logs for container status ...
	I1213 00:14:11.407850  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 00:14:13.965440  177409 system_pods.go:59] 8 kube-system pods found
	I1213 00:14:13.965477  177409 system_pods.go:61] "coredns-5dd5756b68-ftv9l" [60d9730b-2e6b-4263-a70d-273cf6837f60] Running
	I1213 00:14:13.965485  177409 system_pods.go:61] "etcd-default-k8s-diff-port-743278" [4c78aadc-8213-4cd3-a365-e8d71d00b1ed] Running
	I1213 00:14:13.965493  177409 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-743278" [2590235b-fc24-41ed-8879-909cbba26d5c] Running
	I1213 00:14:13.965500  177409 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-743278" [58396a31-0a0d-4a3f-b658-e27f836affd1] Running
	I1213 00:14:13.965505  177409 system_pods.go:61] "kube-proxy-zk4wl" [e20fe8f7-0c1f-4be3-8184-cd3d6cc19a43] Running
	I1213 00:14:13.965509  177409 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-743278" [e4312815-a719-4ab5-b628-9958dd7ce658] Running
	I1213 00:14:13.965518  177409 system_pods.go:61] "metrics-server-57f55c9bc5-6q9jg" [b1849258-4fd1-43a5-b67b-02d8e44acd8b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:13.965528  177409 system_pods.go:61] "storage-provisioner" [d87ee16e-300f-4797-b0be-efc256d0e827] Running
	I1213 00:14:13.965538  177409 system_pods.go:74] duration metric: took 3.946305195s to wait for pod list to return data ...
	I1213 00:14:13.965548  177409 default_sa.go:34] waiting for default service account to be created ...
	I1213 00:14:13.969074  177409 default_sa.go:45] found service account: "default"
	I1213 00:14:13.969103  177409 default_sa.go:55] duration metric: took 3.543208ms for default service account to be created ...
	I1213 00:14:13.969114  177409 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 00:14:13.977167  177409 system_pods.go:86] 8 kube-system pods found
	I1213 00:14:13.977201  177409 system_pods.go:89] "coredns-5dd5756b68-ftv9l" [60d9730b-2e6b-4263-a70d-273cf6837f60] Running
	I1213 00:14:13.977211  177409 system_pods.go:89] "etcd-default-k8s-diff-port-743278" [4c78aadc-8213-4cd3-a365-e8d71d00b1ed] Running
	I1213 00:14:13.977219  177409 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-743278" [2590235b-fc24-41ed-8879-909cbba26d5c] Running
	I1213 00:14:13.977226  177409 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-743278" [58396a31-0a0d-4a3f-b658-e27f836affd1] Running
	I1213 00:14:13.977232  177409 system_pods.go:89] "kube-proxy-zk4wl" [e20fe8f7-0c1f-4be3-8184-cd3d6cc19a43] Running
	I1213 00:14:13.977238  177409 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-743278" [e4312815-a719-4ab5-b628-9958dd7ce658] Running
	I1213 00:14:13.977249  177409 system_pods.go:89] "metrics-server-57f55c9bc5-6q9jg" [b1849258-4fd1-43a5-b67b-02d8e44acd8b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:13.977257  177409 system_pods.go:89] "storage-provisioner" [d87ee16e-300f-4797-b0be-efc256d0e827] Running
	I1213 00:14:13.977272  177409 system_pods.go:126] duration metric: took 8.1502ms to wait for k8s-apps to be running ...
	I1213 00:14:13.977288  177409 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 00:14:13.977342  177409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:13.996304  177409 system_svc.go:56] duration metric: took 19.006856ms WaitForService to wait for kubelet.
	I1213 00:14:13.996340  177409 kubeadm.go:581] duration metric: took 4m20.846697962s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1213 00:14:13.996374  177409 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:14:14.000473  177409 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:14:14.000505  177409 node_conditions.go:123] node cpu capacity is 2
	I1213 00:14:14.000518  177409 node_conditions.go:105] duration metric: took 4.137212ms to run NodePressure ...
	I1213 00:14:14.000534  177409 start.go:228] waiting for startup goroutines ...
	I1213 00:14:14.000544  177409 start.go:233] waiting for cluster config update ...
	I1213 00:14:14.000561  177409 start.go:242] writing updated cluster config ...
	I1213 00:14:14.000901  177409 ssh_runner.go:195] Run: rm -f paused
	I1213 00:14:14.059785  177409 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1213 00:14:14.062155  177409 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-743278" cluster and "default" namespace by default
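	Illustrative aside (not part of the captured run): the per-container log gathering seen above (logs.go:123) is a two-step crictl pattern — resolve a container ID by name, then tail its logs. A minimal reproduction on the node, assuming crictl is on PATH and using kube-controller-manager only as an example name:

	    NAME=kube-controller-manager                        # example; any name queried above works
	    ID=$(sudo crictl ps -a --quiet --name="$NAME" | head -n1)
	    # ID is empty when no such container exists (compare the "kindnet" lookup above)
	    [ -n "$ID" ] && sudo crictl logs --tail 400 "$ID"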
	I1213 00:14:11.212405  177307 out.go:204]   - Booting up control plane ...
	I1213 00:14:11.212538  177307 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 00:14:11.213865  177307 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 00:14:11.215312  177307 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 00:14:11.235356  177307 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 00:14:11.236645  177307 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 00:14:11.236755  177307 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1213 00:14:11.385788  177307 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 00:14:12.284994  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:14.784159  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:19.387966  177307 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002219 seconds
	I1213 00:14:19.402873  177307 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 00:14:19.424220  177307 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 00:14:19.954243  177307 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 00:14:19.954453  177307 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-143586 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 00:14:20.468986  177307 kubeadm.go:322] [bootstrap-token] Using token: nss44e.j85t1ilri9kvvn0e
	I1213 00:14:16.785364  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:19.284214  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:20.470732  177307 out.go:204]   - Configuring RBAC rules ...
	I1213 00:14:20.470866  177307 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 00:14:20.479490  177307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 00:14:20.488098  177307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 00:14:20.491874  177307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 00:14:20.496891  177307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 00:14:20.506058  177307 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 00:14:20.523032  177307 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 00:14:20.796465  177307 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1213 00:14:20.892018  177307 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1213 00:14:20.892049  177307 kubeadm.go:322] 
	I1213 00:14:20.892159  177307 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1213 00:14:20.892185  177307 kubeadm.go:322] 
	I1213 00:14:20.892284  177307 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1213 00:14:20.892296  177307 kubeadm.go:322] 
	I1213 00:14:20.892338  177307 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1213 00:14:20.892421  177307 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 00:14:20.892512  177307 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 00:14:20.892529  177307 kubeadm.go:322] 
	I1213 00:14:20.892620  177307 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1213 00:14:20.892648  177307 kubeadm.go:322] 
	I1213 00:14:20.892734  177307 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 00:14:20.892745  177307 kubeadm.go:322] 
	I1213 00:14:20.892807  177307 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1213 00:14:20.892938  177307 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 00:14:20.893057  177307 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 00:14:20.893072  177307 kubeadm.go:322] 
	I1213 00:14:20.893182  177307 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 00:14:20.893286  177307 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1213 00:14:20.893307  177307 kubeadm.go:322] 
	I1213 00:14:20.893446  177307 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nss44e.j85t1ilri9kvvn0e \
	I1213 00:14:20.893588  177307 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d \
	I1213 00:14:20.893625  177307 kubeadm.go:322] 	--control-plane 
	I1213 00:14:20.893634  177307 kubeadm.go:322] 
	I1213 00:14:20.893740  177307 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1213 00:14:20.893752  177307 kubeadm.go:322] 
	I1213 00:14:20.893877  177307 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nss44e.j85t1ilri9kvvn0e \
	I1213 00:14:20.894017  177307 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1213 00:14:20.895217  177307 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 00:14:20.895249  177307 cni.go:84] Creating CNI manager for ""
	I1213 00:14:20.895261  177307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:14:20.897262  177307 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:14:20.898838  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:14:20.933446  177307 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:14:20.985336  177307 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:14:20.985435  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:20.985458  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=no-preload-143586 minikube.k8s.io/updated_at=2023_12_13T00_14_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:21.062513  177307 ops.go:34] apiserver oom_adj: -16
	I1213 00:14:21.374568  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:21.482135  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:22.088971  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:22.588816  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:23.088960  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:23.588701  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:24.088568  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:21.783473  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:23.784019  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:25.785712  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:24.588803  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:25.088983  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:25.589097  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:26.088561  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:26.589160  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:27.088601  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:27.588337  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:28.088578  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:28.588533  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:29.088398  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:28.284015  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:30.285509  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:29.588587  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:30.088826  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:30.588871  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:31.089336  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:31.588959  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:32.088390  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:32.589079  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:33.088948  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:33.589067  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:34.089108  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:34.261304  177307 kubeadm.go:1088] duration metric: took 13.275930767s to wait for elevateKubeSystemPrivileges.
	I1213 00:14:34.261367  177307 kubeadm.go:406] StartCluster complete in 5m12.573209179s
	I1213 00:14:34.261392  177307 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:14:34.261511  177307 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:14:34.264237  177307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:14:34.264668  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:14:34.264951  177307 config.go:182] Loaded profile config "no-preload-143586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1213 00:14:34.265065  177307 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:14:34.265128  177307 addons.go:69] Setting storage-provisioner=true in profile "no-preload-143586"
	I1213 00:14:34.265150  177307 addons.go:231] Setting addon storage-provisioner=true in "no-preload-143586"
	W1213 00:14:34.265161  177307 addons.go:240] addon storage-provisioner should already be in state true
	I1213 00:14:34.265202  177307 host.go:66] Checking if "no-preload-143586" exists ...
	I1213 00:14:34.265231  177307 addons.go:69] Setting default-storageclass=true in profile "no-preload-143586"
	I1213 00:14:34.265262  177307 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-143586"
	I1213 00:14:34.265606  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.265612  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.265627  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.265628  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.265846  177307 addons.go:69] Setting metrics-server=true in profile "no-preload-143586"
	I1213 00:14:34.265878  177307 addons.go:231] Setting addon metrics-server=true in "no-preload-143586"
	W1213 00:14:34.265890  177307 addons.go:240] addon metrics-server should already be in state true
	I1213 00:14:34.265935  177307 host.go:66] Checking if "no-preload-143586" exists ...
	I1213 00:14:34.266231  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.266277  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.287844  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34161
	I1213 00:14:34.287882  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38719
	I1213 00:14:34.287968  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43979
	I1213 00:14:34.288509  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.288529  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.288811  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.289178  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.289197  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.289310  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.289325  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.289335  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.289347  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.289707  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.289713  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.289736  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.289891  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:14:34.290392  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.290398  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.290415  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.290417  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.293696  177307 addons.go:231] Setting addon default-storageclass=true in "no-preload-143586"
	W1213 00:14:34.293725  177307 addons.go:240] addon default-storageclass should already be in state true
	I1213 00:14:34.293756  177307 host.go:66] Checking if "no-preload-143586" exists ...
	I1213 00:14:34.294150  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.294187  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.309103  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39273
	I1213 00:14:34.309683  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.310362  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.310387  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.310830  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.311091  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:14:34.312755  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37041
	I1213 00:14:34.313192  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.313601  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:14:34.313796  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.313814  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.316496  177307 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:14:34.314223  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.316102  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41945
	I1213 00:14:34.318112  177307 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:14:34.318127  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:14:34.318144  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:14:34.318260  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.318670  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.318693  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.319401  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.319422  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.319860  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.320080  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:14:34.321977  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:14:34.323695  177307 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 00:14:34.322509  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.325025  177307 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 00:14:34.325037  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 00:14:34.325053  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:14:34.323731  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:14:34.325089  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.323250  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:14:34.325250  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:14:34.325428  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:14:34.325563  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:14:34.328055  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.328364  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:14:34.328386  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.328712  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:14:34.328867  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:14:34.328980  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:14:34.329099  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:14:34.339175  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35817
	I1213 00:14:34.339820  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.340300  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.340314  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.340662  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.340821  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:14:34.342399  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:14:34.342673  177307 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:14:34.342694  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:14:34.342720  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:14:34.345475  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.345804  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:14:34.345839  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.346062  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:14:34.346256  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:14:34.346453  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:14:34.346622  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:14:34.425634  177307 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-143586" context rescaled to 1 replicas
	I1213 00:14:34.425672  177307 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.181 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:14:34.427471  177307 out.go:177] * Verifying Kubernetes components...
	I1213 00:14:32.783642  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:34.786810  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:34.428983  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:34.589995  177307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:14:34.590692  177307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:14:34.592452  177307 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 00:14:34.592472  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 00:14:34.643312  177307 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 00:14:34.643336  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 00:14:34.649786  177307 node_ready.go:35] waiting up to 6m0s for node "no-preload-143586" to be "Ready" ...
	I1213 00:14:34.649926  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 00:14:34.683306  177307 node_ready.go:49] node "no-preload-143586" has status "Ready":"True"
	I1213 00:14:34.683339  177307 node_ready.go:38] duration metric: took 33.525188ms waiting for node "no-preload-143586" to be "Ready" ...
	I1213 00:14:34.683352  177307 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:14:34.711542  177307 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:14:34.711570  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 00:14:34.738788  177307 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-8fb8b" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:34.823110  177307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:14:35.743550  177307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.153515373s)
	I1213 00:14:35.743618  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.743634  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.743661  177307 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.093703901s)
	I1213 00:14:35.743611  177307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.152891747s)
	I1213 00:14:35.743699  177307 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1213 00:14:35.743719  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.743732  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.744060  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:35.744059  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.744088  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.744100  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.744114  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.744114  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:35.744158  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.744195  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.744209  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.744223  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.745779  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.745829  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.745855  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.745838  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.745797  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:35.745790  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:35.757271  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.757292  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.757758  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.757776  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.757787  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:36.114702  177307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.291538738s)
	I1213 00:14:36.114760  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:36.114773  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:36.115132  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:36.115149  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:36.115158  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:36.115168  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:36.115411  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:36.115426  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:36.115436  177307 addons.go:467] Verifying addon metrics-server=true in "no-preload-143586"
	I1213 00:14:36.117975  177307 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1213 00:14:36.119554  177307 addons.go:502] enable addons completed in 1.85448385s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1213 00:14:37.069993  177307 pod_ready.go:102] pod "coredns-76f75df574-8fb8b" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:38.563525  177307 pod_ready.go:92] pod "coredns-76f75df574-8fb8b" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.563551  177307 pod_ready.go:81] duration metric: took 3.824732725s waiting for pod "coredns-76f75df574-8fb8b" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.563561  177307 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hs9rg" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.565949  177307 pod_ready.go:97] error getting pod "coredns-76f75df574-hs9rg" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-hs9rg" not found
	I1213 00:14:38.565976  177307 pod_ready.go:81] duration metric: took 2.409349ms waiting for pod "coredns-76f75df574-hs9rg" in "kube-system" namespace to be "Ready" ...
	E1213 00:14:38.565984  177307 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-hs9rg" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-hs9rg" not found
	I1213 00:14:38.565990  177307 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.571396  177307 pod_ready.go:92] pod "etcd-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.571416  177307 pod_ready.go:81] duration metric: took 5.419634ms waiting for pod "etcd-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.571424  177307 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.576228  177307 pod_ready.go:92] pod "kube-apiserver-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.576248  177307 pod_ready.go:81] duration metric: took 4.818853ms waiting for pod "kube-apiserver-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.576256  177307 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.581260  177307 pod_ready.go:92] pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.581281  177307 pod_ready.go:81] duration metric: took 5.019621ms waiting for pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.581289  177307 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xsdtr" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.760984  177307 pod_ready.go:92] pod "kube-proxy-xsdtr" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.761006  177307 pod_ready.go:81] duration metric: took 179.711484ms waiting for pod "kube-proxy-xsdtr" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.761015  177307 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:39.160713  177307 pod_ready.go:92] pod "kube-scheduler-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:39.160738  177307 pod_ready.go:81] duration metric: took 399.716844ms waiting for pod "kube-scheduler-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:39.160746  177307 pod_ready.go:38] duration metric: took 4.477382003s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:14:39.160762  177307 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:14:39.160809  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:14:39.176747  177307 api_server.go:72] duration metric: took 4.751030848s to wait for apiserver process to appear ...
	I1213 00:14:39.176774  177307 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:14:39.176791  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:14:39.183395  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 200:
	ok
	I1213 00:14:39.184769  177307 api_server.go:141] control plane version: v1.29.0-rc.2
	I1213 00:14:39.184789  177307 api_server.go:131] duration metric: took 8.009007ms to wait for apiserver health ...
	I1213 00:14:39.184799  177307 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:14:39.364215  177307 system_pods.go:59] 8 kube-system pods found
	I1213 00:14:39.364251  177307 system_pods.go:61] "coredns-76f75df574-8fb8b" [3ca4e237-6a35-4e8f-9731-3f9655eba995] Running
	I1213 00:14:39.364256  177307 system_pods.go:61] "etcd-no-preload-143586" [205757f5-5bd7-416c-af72-6ebf428d7302] Running
	I1213 00:14:39.364260  177307 system_pods.go:61] "kube-apiserver-no-preload-143586" [a479caf2-8ff4-4b78-8d93-e8f672a853b9] Running
	I1213 00:14:39.364265  177307 system_pods.go:61] "kube-controller-manager-no-preload-143586" [4afd5e8a-64b3-4e0c-a723-b6a6bd2445d4] Running
	I1213 00:14:39.364269  177307 system_pods.go:61] "kube-proxy-xsdtr" [23a261a4-17d1-4657-8052-02b71055c850] Running
	I1213 00:14:39.364273  177307 system_pods.go:61] "kube-scheduler-no-preload-143586" [71312243-0ba6-40e0-80a5-c1652b5270e9] Running
	I1213 00:14:39.364280  177307 system_pods.go:61] "metrics-server-57f55c9bc5-q7v45" [1579f5c9-d574-4ab8-9add-e89621b9c203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:39.364284  177307 system_pods.go:61] "storage-provisioner" [400b27cc-1713-4201-8097-3e3fd8004690] Running
	I1213 00:14:39.364292  177307 system_pods.go:74] duration metric: took 179.488069ms to wait for pod list to return data ...
	I1213 00:14:39.364301  177307 default_sa.go:34] waiting for default service account to be created ...
	I1213 00:14:39.560330  177307 default_sa.go:45] found service account: "default"
	I1213 00:14:39.560364  177307 default_sa.go:55] duration metric: took 196.056049ms for default service account to be created ...
	I1213 00:14:39.560376  177307 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 00:14:39.763340  177307 system_pods.go:86] 8 kube-system pods found
	I1213 00:14:39.763384  177307 system_pods.go:89] "coredns-76f75df574-8fb8b" [3ca4e237-6a35-4e8f-9731-3f9655eba995] Running
	I1213 00:14:39.763393  177307 system_pods.go:89] "etcd-no-preload-143586" [205757f5-5bd7-416c-af72-6ebf428d7302] Running
	I1213 00:14:39.763400  177307 system_pods.go:89] "kube-apiserver-no-preload-143586" [a479caf2-8ff4-4b78-8d93-e8f672a853b9] Running
	I1213 00:14:39.763405  177307 system_pods.go:89] "kube-controller-manager-no-preload-143586" [4afd5e8a-64b3-4e0c-a723-b6a6bd2445d4] Running
	I1213 00:14:39.763409  177307 system_pods.go:89] "kube-proxy-xsdtr" [23a261a4-17d1-4657-8052-02b71055c850] Running
	I1213 00:14:39.763414  177307 system_pods.go:89] "kube-scheduler-no-preload-143586" [71312243-0ba6-40e0-80a5-c1652b5270e9] Running
	I1213 00:14:39.763426  177307 system_pods.go:89] "metrics-server-57f55c9bc5-q7v45" [1579f5c9-d574-4ab8-9add-e89621b9c203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:39.763434  177307 system_pods.go:89] "storage-provisioner" [400b27cc-1713-4201-8097-3e3fd8004690] Running
	I1213 00:14:39.763449  177307 system_pods.go:126] duration metric: took 203.065345ms to wait for k8s-apps to be running ...
	I1213 00:14:39.763458  177307 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 00:14:39.763517  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:39.783072  177307 system_svc.go:56] duration metric: took 19.601725ms WaitForService to wait for kubelet.
	I1213 00:14:39.783120  177307 kubeadm.go:581] duration metric: took 5.357406192s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1213 00:14:39.783147  177307 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:14:39.962475  177307 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:14:39.962501  177307 node_conditions.go:123] node cpu capacity is 2
	I1213 00:14:39.962511  177307 node_conditions.go:105] duration metric: took 179.359327ms to run NodePressure ...
	I1213 00:14:39.962524  177307 start.go:228] waiting for startup goroutines ...
	I1213 00:14:39.962532  177307 start.go:233] waiting for cluster config update ...
	I1213 00:14:39.962544  177307 start.go:242] writing updated cluster config ...
	I1213 00:14:39.962816  177307 ssh_runner.go:195] Run: rm -f paused
	I1213 00:14:40.016206  177307 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.2 (minor skew: 1)
	I1213 00:14:40.018375  177307 out.go:177] * Done! kubectl is now configured to use "no-preload-143586" cluster and "default" namespace by default
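	Illustrative aside (not part of the captured run): the "host.minikube.internal" record reported as injected into CoreDNS at 00:14:35 above is produced by round-tripping the coredns ConfigMap through sed and kubectl replace. A simplified sketch of the same idea, assuming kubectl already points at the freshly started cluster, 192.168.50.1 is the host address shown in the log, and omitting the extra -e expression that also enables CoreDNS logging:

	    kubectl -n kube-system get configmap coredns -o yaml \
	      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' \
	      | kubectl -n kube-system replace -f -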
	I1213 00:14:37.286105  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:39.786060  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:42.285678  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:44.784213  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:47.285680  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:49.783428  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:51.785923  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:54.283780  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:56.783343  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:59.283053  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:15:00.976984  176813 pod_ready.go:81] duration metric: took 4m0.000041493s waiting for pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace to be "Ready" ...
	E1213 00:15:00.977016  176813 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1213 00:15:00.977037  176813 pod_ready.go:38] duration metric: took 4m1.19985839s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:15:00.977064  176813 kubeadm.go:640] restartCluster took 5m6.659231001s
	W1213 00:15:00.977141  176813 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1213 00:15:00.977178  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 00:15:07.653665  176813 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.676456274s)
	I1213 00:15:07.653745  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:15:07.673981  176813 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:15:07.688018  176813 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:15:07.699196  176813 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:15:07.699244  176813 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1213 00:15:07.761890  176813 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1213 00:15:07.762010  176813 kubeadm.go:322] [preflight] Running pre-flight checks
	I1213 00:15:07.921068  176813 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 00:15:07.921220  176813 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 00:15:07.921360  176813 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 00:15:08.151937  176813 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 00:15:08.152063  176813 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 00:15:08.159296  176813 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1213 00:15:08.285060  176813 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 00:15:08.286903  176813 out.go:204]   - Generating certificates and keys ...
	I1213 00:15:08.287074  176813 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1213 00:15:08.287174  176813 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1213 00:15:08.290235  176813 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 00:15:08.290397  176813 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1213 00:15:08.290878  176813 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 00:15:08.291179  176813 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1213 00:15:08.291663  176813 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1213 00:15:08.292342  176813 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1213 00:15:08.292822  176813 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 00:15:08.293259  176813 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 00:15:08.293339  176813 kubeadm.go:322] [certs] Using the existing "sa" key
	I1213 00:15:08.293429  176813 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 00:15:08.526145  176813 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 00:15:08.586842  176813 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 00:15:08.636575  176813 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 00:15:08.706448  176813 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 00:15:08.710760  176813 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 00:15:08.713664  176813 out.go:204]   - Booting up control plane ...
	I1213 00:15:08.713773  176813 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 00:15:08.718431  176813 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 00:15:08.719490  176813 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 00:15:08.720327  176813 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 00:15:08.722707  176813 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 00:15:19.226839  176813 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503804 seconds
	I1213 00:15:19.227005  176813 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 00:15:19.245054  176813 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 00:15:19.773910  176813 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 00:15:19.774100  176813 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-508612 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1213 00:15:20.284136  176813 kubeadm.go:322] [bootstrap-token] Using token: lgq05i.maaa534t8w734gvq
	I1213 00:15:20.286042  176813 out.go:204]   - Configuring RBAC rules ...
	I1213 00:15:20.286186  176813 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 00:15:20.297875  176813 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 00:15:20.305644  176813 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 00:15:20.314089  176813 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 00:15:20.319091  176813 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 00:15:20.387872  176813 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1213 00:15:20.733546  176813 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1213 00:15:20.735072  176813 kubeadm.go:322] 
	I1213 00:15:20.735157  176813 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1213 00:15:20.735168  176813 kubeadm.go:322] 
	I1213 00:15:20.735280  176813 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1213 00:15:20.735291  176813 kubeadm.go:322] 
	I1213 00:15:20.735314  176813 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1213 00:15:20.735389  176813 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 00:15:20.735451  176813 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 00:15:20.735459  176813 kubeadm.go:322] 
	I1213 00:15:20.735517  176813 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1213 00:15:20.735602  176813 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 00:15:20.735660  176813 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 00:15:20.735666  176813 kubeadm.go:322] 
	I1213 00:15:20.735757  176813 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1213 00:15:20.735867  176813 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1213 00:15:20.735889  176813 kubeadm.go:322] 
	I1213 00:15:20.736036  176813 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token lgq05i.maaa534t8w734gvq \
	I1213 00:15:20.736152  176813 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d \
	I1213 00:15:20.736223  176813 kubeadm.go:322]     --control-plane 	  
	I1213 00:15:20.736240  176813 kubeadm.go:322] 
	I1213 00:15:20.736348  176813 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1213 00:15:20.736357  176813 kubeadm.go:322] 
	I1213 00:15:20.736472  176813 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token lgq05i.maaa534t8w734gvq \
	I1213 00:15:20.736596  176813 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1213 00:15:20.737307  176813 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 00:15:20.737332  176813 cni.go:84] Creating CNI manager for ""
	I1213 00:15:20.737340  176813 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:15:20.739085  176813 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:15:20.740295  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:15:20.749618  176813 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:15:20.767876  176813 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:15:20.767933  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:20.767984  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=old-k8s-version-508612 minikube.k8s.io/updated_at=2023_12_13T00_15_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:21.051677  176813 ops.go:34] apiserver oom_adj: -16
	I1213 00:15:21.051709  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:21.148546  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:21.741424  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:22.240885  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:22.741651  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:23.241662  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:23.741098  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:24.241530  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:24.741035  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:25.241391  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:25.741004  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:26.241402  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:26.741333  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:27.241828  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:27.741151  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:28.240933  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:28.741661  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:29.241431  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:29.741667  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:30.241070  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:30.741117  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:31.241355  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:31.741697  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:32.241779  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:32.741165  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:33.241739  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:33.741499  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:34.241477  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:34.740804  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:35.241596  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:35.374344  176813 kubeadm.go:1088] duration metric: took 14.606462065s to wait for elevateKubeSystemPrivileges.
	I1213 00:15:35.374388  176813 kubeadm.go:406] StartCluster complete in 5m41.120911791s
	I1213 00:15:35.374416  176813 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:15:35.374522  176813 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:15:35.376587  176813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:15:35.376829  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:15:35.376896  176813 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:15:35.376998  176813 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-508612"
	I1213 00:15:35.377018  176813 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-508612"
	I1213 00:15:35.377026  176813 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-508612"
	W1213 00:15:35.377036  176813 addons.go:240] addon storage-provisioner should already be in state true
	I1213 00:15:35.377038  176813 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-508612"
	I1213 00:15:35.377075  176813 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-508612"
	W1213 00:15:35.377089  176813 addons.go:240] addon metrics-server should already be in state true
	I1213 00:15:35.377107  176813 host.go:66] Checking if "old-k8s-version-508612" exists ...
	I1213 00:15:35.377140  176813 host.go:66] Checking if "old-k8s-version-508612" exists ...
	I1213 00:15:35.377536  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.377569  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.377577  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.377603  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.377036  176813 config.go:182] Loaded profile config "old-k8s-version-508612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1213 00:15:35.377038  176813 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-508612"
	I1213 00:15:35.378232  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.378269  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.396758  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38365
	I1213 00:15:35.397242  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.397563  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46353
	I1213 00:15:35.397732  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34007
	I1213 00:15:35.398240  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.398249  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.398768  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.398789  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.398927  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.398944  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.399039  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.399048  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.399144  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.399485  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.399506  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.399699  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:15:35.399783  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.399822  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.400014  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.400052  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.403424  176813 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-508612"
	W1213 00:15:35.403445  176813 addons.go:240] addon default-storageclass should already be in state true
	I1213 00:15:35.403470  176813 host.go:66] Checking if "old-k8s-version-508612" exists ...
	I1213 00:15:35.403784  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.403809  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.419742  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46263
	I1213 00:15:35.419763  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39553
	I1213 00:15:35.420351  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.420378  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.420912  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.420927  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.421042  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.421062  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.421403  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.421450  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.421588  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:15:35.421633  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:15:35.422473  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40033
	I1213 00:15:35.423216  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.423818  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:15:35.423875  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.423890  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.426328  176813 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 00:15:35.424310  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.424522  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:15:35.428333  176813 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 00:15:35.428351  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 00:15:35.428377  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:15:35.430256  176813 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:15:35.428950  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.430439  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.431959  176813 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:15:35.431260  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.431816  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:15:35.432011  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:15:35.431977  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:15:35.432031  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.432047  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:15:35.432199  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:15:35.432359  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:15:35.432587  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:15:35.434239  176813 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-508612" context rescaled to 1 replicas
	I1213 00:15:35.434275  176813 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:15:35.435769  176813 out.go:177] * Verifying Kubernetes components...
	I1213 00:15:35.437082  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:15:35.434982  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.435627  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:15:35.437148  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:15:35.437186  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.437343  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:15:35.437515  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:15:35.437646  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:15:35.450115  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33843
	I1213 00:15:35.450582  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.451077  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.451104  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.451548  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.451822  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:15:35.453721  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:15:35.454034  176813 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:15:35.454052  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:15:35.454072  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:15:35.456976  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.457326  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:15:35.457351  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.457530  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:15:35.457709  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:15:35.457859  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:15:35.458008  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:15:35.599631  176813 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:15:35.607268  176813 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-508612" to be "Ready" ...
	I1213 00:15:35.607407  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 00:15:35.627686  176813 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 00:15:35.627720  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 00:15:35.641865  176813 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:15:35.653972  176813 node_ready.go:49] node "old-k8s-version-508612" has status "Ready":"True"
	I1213 00:15:35.654008  176813 node_ready.go:38] duration metric: took 46.699606ms waiting for node "old-k8s-version-508612" to be "Ready" ...
	I1213 00:15:35.654022  176813 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:15:35.701904  176813 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 00:15:35.701939  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 00:15:35.722752  176813 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-4xsr7" in "kube-system" namespace to be "Ready" ...
	I1213 00:15:35.779684  176813 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:15:35.779719  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 00:15:35.871071  176813 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:15:36.486377  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.486409  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.486428  176813 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1213 00:15:36.486495  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.486513  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.486715  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.486725  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.486734  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.486741  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.486816  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.486826  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.486834  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.486843  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.487015  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.487022  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.487048  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.487156  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.487172  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.487186  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.535004  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.535026  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.535335  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.535394  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.535407  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.671282  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.671308  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.671649  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.671719  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.671739  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.671758  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.671771  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.672067  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.672091  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.672092  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.672102  176813 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-508612"
	I1213 00:15:36.673881  176813 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1213 00:15:36.675200  176813 addons.go:502] enable addons completed in 1.298322525s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1213 00:15:37.860212  176813 pod_ready.go:102] pod "coredns-5644d7b6d9-4xsr7" in "kube-system" namespace has status "Ready":"False"
	I1213 00:15:40.350347  176813 pod_ready.go:92] pod "coredns-5644d7b6d9-4xsr7" in "kube-system" namespace has status "Ready":"True"
	I1213 00:15:40.350370  176813 pod_ready.go:81] duration metric: took 4.627584432s waiting for pod "coredns-5644d7b6d9-4xsr7" in "kube-system" namespace to be "Ready" ...
	I1213 00:15:40.350383  176813 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wz29m" in "kube-system" namespace to be "Ready" ...
	I1213 00:15:40.356218  176813 pod_ready.go:92] pod "kube-proxy-wz29m" in "kube-system" namespace has status "Ready":"True"
	I1213 00:15:40.356240  176813 pod_ready.go:81] duration metric: took 5.84816ms waiting for pod "kube-proxy-wz29m" in "kube-system" namespace to be "Ready" ...
	I1213 00:15:40.356252  176813 pod_ready.go:38] duration metric: took 4.702215033s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:15:40.356270  176813 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:15:40.356324  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:15:40.372391  176813 api_server.go:72] duration metric: took 4.938079614s to wait for apiserver process to appear ...
	I1213 00:15:40.372424  176813 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:15:40.372459  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:15:40.378882  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I1213 00:15:40.379747  176813 api_server.go:141] control plane version: v1.16.0
	I1213 00:15:40.379770  176813 api_server.go:131] duration metric: took 7.338199ms to wait for apiserver health ...
	I1213 00:15:40.379780  176813 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:15:40.383090  176813 system_pods.go:59] 4 kube-system pods found
	I1213 00:15:40.383110  176813 system_pods.go:61] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:40.383115  176813 system_pods.go:61] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:40.383121  176813 system_pods.go:61] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:40.383126  176813 system_pods.go:61] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:40.383133  176813 system_pods.go:74] duration metric: took 3.346988ms to wait for pod list to return data ...
	I1213 00:15:40.383140  176813 default_sa.go:34] waiting for default service account to be created ...
	I1213 00:15:40.385822  176813 default_sa.go:45] found service account: "default"
	I1213 00:15:40.385843  176813 default_sa.go:55] duration metric: took 2.696485ms for default service account to be created ...
	I1213 00:15:40.385851  176813 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 00:15:40.390030  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:40.390056  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:40.390061  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:40.390068  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:40.390072  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:40.390094  176813 retry.go:31] will retry after 206.30305ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:40.602546  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:40.602577  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:40.602582  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:40.602589  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:40.602593  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:40.602611  176813 retry.go:31] will retry after 375.148566ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:40.987598  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:40.987626  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:40.987631  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:40.987639  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:40.987645  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:40.987663  176813 retry.go:31] will retry after 354.607581ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:41.347931  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:41.347965  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:41.347974  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:41.347984  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:41.347992  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:41.348012  176813 retry.go:31] will retry after 443.179207ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:41.796661  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:41.796687  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:41.796692  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:41.796711  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:41.796716  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:41.796733  176813 retry.go:31] will retry after 468.875458ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:42.271565  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:42.271591  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:42.271596  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:42.271603  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:42.271608  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:42.271624  176813 retry.go:31] will retry after 696.629881ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:42.974971  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:42.974997  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:42.975003  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:42.975009  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:42.975015  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:42.975031  176813 retry.go:31] will retry after 830.83436ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:43.810755  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:43.810784  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:43.810792  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:43.810802  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:43.810808  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:43.810830  176813 retry.go:31] will retry after 1.429308487s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:45.245813  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:45.245844  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:45.245852  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:45.245862  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:45.245867  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:45.245887  176813 retry.go:31] will retry after 1.715356562s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:46.966484  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:46.966512  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:46.966517  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:46.966523  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:46.966529  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:46.966546  176813 retry.go:31] will retry after 2.125852813s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:49.097419  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:49.097450  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:49.097460  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:49.097472  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:49.097478  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:49.097496  176813 retry.go:31] will retry after 2.902427415s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:52.005062  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:52.005097  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:52.005106  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:52.005119  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:52.005128  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:52.005154  176813 retry.go:31] will retry after 3.461524498s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:55.471450  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:55.471474  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:55.471480  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:55.471487  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:55.471492  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:55.471509  176813 retry.go:31] will retry after 2.969353102s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:58.445285  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:58.445316  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:58.445324  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:58.445334  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:58.445341  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:58.445363  176813 retry.go:31] will retry after 3.938751371s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:02.389811  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:16:02.389839  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:02.389845  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:02.389851  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:02.389856  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:02.389873  176813 retry.go:31] will retry after 5.281550171s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:07.676759  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:16:07.676786  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:07.676791  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:07.676798  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:07.676802  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:07.676820  176813 retry.go:31] will retry after 8.193775139s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:15.875917  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:16:15.875946  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:15.875951  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:15.875958  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:15.875962  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:15.875980  176813 retry.go:31] will retry after 8.515960159s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:24.397972  176813 system_pods.go:86] 5 kube-system pods found
	I1213 00:16:24.398006  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:24.398014  176813 system_pods.go:89] "etcd-old-k8s-version-508612" [de991ae9-f906-498b-bda3-3cf40035fd6a] Running
	I1213 00:16:24.398021  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:24.398032  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:24.398039  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:24.398060  176813 retry.go:31] will retry after 10.707543157s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:35.112639  176813 system_pods.go:86] 7 kube-system pods found
	I1213 00:16:35.112667  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:35.112672  176813 system_pods.go:89] "etcd-old-k8s-version-508612" [de991ae9-f906-498b-bda3-3cf40035fd6a] Running
	I1213 00:16:35.112677  176813 system_pods.go:89] "kube-controller-manager-old-k8s-version-508612" [c6a195a2-e710-4791-b0a3-32618d3c752c] Running
	I1213 00:16:35.112681  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:35.112685  176813 system_pods.go:89] "kube-scheduler-old-k8s-version-508612" [6ee728bb-d81b-43f2-8aa3-4848871a8f41] Running
	I1213 00:16:35.112691  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:35.112696  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:35.112712  176813 retry.go:31] will retry after 13.429366805s: missing components: kube-apiserver
	I1213 00:16:48.550673  176813 system_pods.go:86] 8 kube-system pods found
	I1213 00:16:48.550704  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:48.550710  176813 system_pods.go:89] "etcd-old-k8s-version-508612" [de991ae9-f906-498b-bda3-3cf40035fd6a] Running
	I1213 00:16:48.550714  176813 system_pods.go:89] "kube-apiserver-old-k8s-version-508612" [1473501b-d17d-4bbb-a61a-1d244f54f70c] Running
	I1213 00:16:48.550718  176813 system_pods.go:89] "kube-controller-manager-old-k8s-version-508612" [c6a195a2-e710-4791-b0a3-32618d3c752c] Running
	I1213 00:16:48.550722  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:48.550726  176813 system_pods.go:89] "kube-scheduler-old-k8s-version-508612" [6ee728bb-d81b-43f2-8aa3-4848871a8f41] Running
	I1213 00:16:48.550733  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:48.550737  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:48.550747  176813 system_pods.go:126] duration metric: took 1m8.164889078s to wait for k8s-apps to be running ...
	I1213 00:16:48.550756  176813 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 00:16:48.550811  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:16:48.568833  176813 system_svc.go:56] duration metric: took 18.062353ms WaitForService to wait for kubelet.
	I1213 00:16:48.568876  176813 kubeadm.go:581] duration metric: took 1m13.134572871s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1213 00:16:48.568901  176813 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:16:48.573103  176813 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:16:48.573128  176813 node_conditions.go:123] node cpu capacity is 2
	I1213 00:16:48.573137  176813 node_conditions.go:105] duration metric: took 4.231035ms to run NodePressure ...
	I1213 00:16:48.573148  176813 start.go:228] waiting for startup goroutines ...
	I1213 00:16:48.573154  176813 start.go:233] waiting for cluster config update ...
	I1213 00:16:48.573163  176813 start.go:242] writing updated cluster config ...
	I1213 00:16:48.573436  176813 ssh_runner.go:195] Run: rm -f paused
	I1213 00:16:48.627109  176813 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1213 00:16:48.628688  176813 out.go:177] 
	W1213 00:16:48.630154  176813 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1213 00:16:48.631498  176813 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1213 00:16:48.633089  176813 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-508612" cluster and "default" namespace by default
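(Editor's note) The warning above flags a 12-minor-version skew between the host kubectl (1.28.4) and the old-k8s-version-508612 cluster (1.16.0), and the log itself suggests using minikube's bundled kubectl. A minimal sketch of acting on that hint, assuming the binary path and profile name shown in this log, and assuming (as minikube does by default) that the kubectl context name matches the profile name:

    # run the version-matched kubectl that minikube downloads for this cluster
    out/minikube-linux-amd64 -p old-k8s-version-508612 kubectl -- get pods -A
    # or keep the host kubectl (1.28.4) and only select this cluster's context
    kubectl --context old-k8s-version-508612 get pods --all-namespaces

The first form avoids skew-related flag/API mismatches; the second is what the test harness itself does and is usually fine for read-only queries.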
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-12-13 00:09:14 UTC, ends at Wed 2023-12-13 00:23:15 UTC. --
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.785322567Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702426995785312156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=54631c87-df27-477b-9e5a-67354b2eac59 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.785956764Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a22ee101-e141-4860-a7ec-797d9046e415 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.786026843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a22ee101-e141-4860-a7ec-797d9046e415 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.786298353Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974,PodSandboxId:81a2e8655210ba4fcad23e3b474f12c91c64adc9305aa41c81cf5eab4215dea8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426221357803656,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d87ee16e-300f-4797-b0be-efc256d0e827,},Annotations:map[string]string{io.kubernetes.container.hash: cfa710aa,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bb98c21bcff8726228600821eacf235eb2353d7e4e4d8a88630582809c061e,PodSandboxId:ec031568c78d926e873d15c56f30e3012b5c398ea4bbd975f9e90a08e7fb17ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702426199308460147,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3227111a-055e-48bc-abe1-5162c09b58da,},Annotations:map[string]string{io.kubernetes.container.hash: b2e47e3c,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad,PodSandboxId:975f035f145ad7a5dbf2062d52fc4129cafc249b2d3be62dd4714336bbd7b535,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702426195822454391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ftv9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d9730b-2e6b-4263-a70d-273cf6837f60,},Annotations:map[string]string{io.kubernetes.container.hash: b3e9de0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a,PodSandboxId:81a2e8655210ba4fcad23e3b474f12c91c64adc9305aa41c81cf5eab4215dea8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702426190137027744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: d87ee16e-300f-4797-b0be-efc256d0e827,},Annotations:map[string]string{io.kubernetes.container.hash: cfa710aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41,PodSandboxId:a0fefa8877f017ebeb4bc2b3102f04d8a8344acd0e2440054d81f908ca4858a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702426189879010423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zk4wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
20fe8f7-0c1f-4be3-8184-cd3d6cc19a43,},Annotations:map[string]string{io.kubernetes.container.hash: b6cc6198,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7,PodSandboxId:c985b7ae69ba37bc8f7025da34682fcb5118ba566c618e7dc4cbed049b75416b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702426182958030058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a26c02c6afbf1d5b4c9f495af669df65,},An
notations:map[string]string{io.kubernetes.container.hash: 8fc23393,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee,PodSandboxId:a568f2ed2f3063586c863022fc223732c8fb2e89aa61194154ac76a0e815ab72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702426182646324616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dc52e0e7f1079fa01d7df8c37839c50,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673,PodSandboxId:0a36632b4feccd3eca6a32fae7b9eaa141825401eac3db221dbf8f7f131a034b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702426182514141514,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
72fbc2300f80dc239672c9760e6a959,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1,PodSandboxId:6a049b2f1db3f4f51b1007c573f5e82a975fee935eb69fed92d815cfed5858d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702426182170181549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
4b09929ec1461cf7ee413fae258f89e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c2ca634,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a22ee101-e141-4860-a7ec-797d9046e415 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.834313387Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=302900f8-e8e5-4645-889d-4897c9bc4104 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.834404471Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=302900f8-e8e5-4645-889d-4897c9bc4104 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.836029052Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=bd914373-0333-4d1e-b4d3-bdd57a59652a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.836418821Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702426995836405912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=bd914373-0333-4d1e-b4d3-bdd57a59652a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.837046531Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=58309554-157a-4ed4-8033-3665e6a98251 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.837102947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=58309554-157a-4ed4-8033-3665e6a98251 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.837315799Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974,PodSandboxId:81a2e8655210ba4fcad23e3b474f12c91c64adc9305aa41c81cf5eab4215dea8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426221357803656,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d87ee16e-300f-4797-b0be-efc256d0e827,},Annotations:map[string]string{io.kubernetes.container.hash: cfa710aa,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bb98c21bcff8726228600821eacf235eb2353d7e4e4d8a88630582809c061e,PodSandboxId:ec031568c78d926e873d15c56f30e3012b5c398ea4bbd975f9e90a08e7fb17ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702426199308460147,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3227111a-055e-48bc-abe1-5162c09b58da,},Annotations:map[string]string{io.kubernetes.container.hash: b2e47e3c,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad,PodSandboxId:975f035f145ad7a5dbf2062d52fc4129cafc249b2d3be62dd4714336bbd7b535,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702426195822454391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ftv9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d9730b-2e6b-4263-a70d-273cf6837f60,},Annotations:map[string]string{io.kubernetes.container.hash: b3e9de0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a,PodSandboxId:81a2e8655210ba4fcad23e3b474f12c91c64adc9305aa41c81cf5eab4215dea8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702426190137027744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: d87ee16e-300f-4797-b0be-efc256d0e827,},Annotations:map[string]string{io.kubernetes.container.hash: cfa710aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41,PodSandboxId:a0fefa8877f017ebeb4bc2b3102f04d8a8344acd0e2440054d81f908ca4858a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702426189879010423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zk4wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
20fe8f7-0c1f-4be3-8184-cd3d6cc19a43,},Annotations:map[string]string{io.kubernetes.container.hash: b6cc6198,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7,PodSandboxId:c985b7ae69ba37bc8f7025da34682fcb5118ba566c618e7dc4cbed049b75416b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702426182958030058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a26c02c6afbf1d5b4c9f495af669df65,},An
notations:map[string]string{io.kubernetes.container.hash: 8fc23393,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee,PodSandboxId:a568f2ed2f3063586c863022fc223732c8fb2e89aa61194154ac76a0e815ab72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702426182646324616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dc52e0e7f1079fa01d7df8c37839c50,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673,PodSandboxId:0a36632b4feccd3eca6a32fae7b9eaa141825401eac3db221dbf8f7f131a034b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702426182514141514,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
72fbc2300f80dc239672c9760e6a959,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1,PodSandboxId:6a049b2f1db3f4f51b1007c573f5e82a975fee935eb69fed92d815cfed5858d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702426182170181549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
4b09929ec1461cf7ee413fae258f89e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c2ca634,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=58309554-157a-4ed4-8033-3665e6a98251 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.884931635Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fa77904d-fb6f-40c2-a022-b799b670cc93 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.884988868Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fa77904d-fb6f-40c2-a022-b799b670cc93 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.886311636Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9c38bd0f-a64f-4fe6-9d9b-d30d5c4ee21f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.886928522Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702426995886912265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9c38bd0f-a64f-4fe6-9d9b-d30d5c4ee21f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.888133331Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=959b34ea-27b1-42bd-b3c6-b578a94912ef name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.888180343Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=959b34ea-27b1-42bd-b3c6-b578a94912ef name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.888417013Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974,PodSandboxId:81a2e8655210ba4fcad23e3b474f12c91c64adc9305aa41c81cf5eab4215dea8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426221357803656,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d87ee16e-300f-4797-b0be-efc256d0e827,},Annotations:map[string]string{io.kubernetes.container.hash: cfa710aa,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bb98c21bcff8726228600821eacf235eb2353d7e4e4d8a88630582809c061e,PodSandboxId:ec031568c78d926e873d15c56f30e3012b5c398ea4bbd975f9e90a08e7fb17ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702426199308460147,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3227111a-055e-48bc-abe1-5162c09b58da,},Annotations:map[string]string{io.kubernetes.container.hash: b2e47e3c,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad,PodSandboxId:975f035f145ad7a5dbf2062d52fc4129cafc249b2d3be62dd4714336bbd7b535,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702426195822454391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ftv9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d9730b-2e6b-4263-a70d-273cf6837f60,},Annotations:map[string]string{io.kubernetes.container.hash: b3e9de0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a,PodSandboxId:81a2e8655210ba4fcad23e3b474f12c91c64adc9305aa41c81cf5eab4215dea8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702426190137027744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: d87ee16e-300f-4797-b0be-efc256d0e827,},Annotations:map[string]string{io.kubernetes.container.hash: cfa710aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41,PodSandboxId:a0fefa8877f017ebeb4bc2b3102f04d8a8344acd0e2440054d81f908ca4858a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702426189879010423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zk4wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
20fe8f7-0c1f-4be3-8184-cd3d6cc19a43,},Annotations:map[string]string{io.kubernetes.container.hash: b6cc6198,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7,PodSandboxId:c985b7ae69ba37bc8f7025da34682fcb5118ba566c618e7dc4cbed049b75416b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702426182958030058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a26c02c6afbf1d5b4c9f495af669df65,},An
notations:map[string]string{io.kubernetes.container.hash: 8fc23393,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee,PodSandboxId:a568f2ed2f3063586c863022fc223732c8fb2e89aa61194154ac76a0e815ab72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702426182646324616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dc52e0e7f1079fa01d7df8c37839c50,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673,PodSandboxId:0a36632b4feccd3eca6a32fae7b9eaa141825401eac3db221dbf8f7f131a034b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702426182514141514,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
72fbc2300f80dc239672c9760e6a959,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1,PodSandboxId:6a049b2f1db3f4f51b1007c573f5e82a975fee935eb69fed92d815cfed5858d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702426182170181549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
4b09929ec1461cf7ee413fae258f89e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c2ca634,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=959b34ea-27b1-42bd-b3c6-b578a94912ef name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.922871085Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6b65bbf5-94ae-409b-948b-22756beb514e name=/runtime.v1.RuntimeService/Version
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.922968301Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6b65bbf5-94ae-409b-948b-22756beb514e name=/runtime.v1.RuntimeService/Version
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.924821460Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4512f326-d9b0-4625-87be-4ef0c70b0391 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.925202998Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702426995925185338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=4512f326-d9b0-4625-87be-4ef0c70b0391 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.925855180Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=573aa061-b777-4c50-9aa6-6ae21b2d6987 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.925909412Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=573aa061-b777-4c50-9aa6-6ae21b2d6987 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:15 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:23:15.926094485Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974,PodSandboxId:81a2e8655210ba4fcad23e3b474f12c91c64adc9305aa41c81cf5eab4215dea8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426221357803656,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d87ee16e-300f-4797-b0be-efc256d0e827,},Annotations:map[string]string{io.kubernetes.container.hash: cfa710aa,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bb98c21bcff8726228600821eacf235eb2353d7e4e4d8a88630582809c061e,PodSandboxId:ec031568c78d926e873d15c56f30e3012b5c398ea4bbd975f9e90a08e7fb17ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702426199308460147,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3227111a-055e-48bc-abe1-5162c09b58da,},Annotations:map[string]string{io.kubernetes.container.hash: b2e47e3c,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad,PodSandboxId:975f035f145ad7a5dbf2062d52fc4129cafc249b2d3be62dd4714336bbd7b535,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702426195822454391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ftv9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d9730b-2e6b-4263-a70d-273cf6837f60,},Annotations:map[string]string{io.kubernetes.container.hash: b3e9de0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a,PodSandboxId:81a2e8655210ba4fcad23e3b474f12c91c64adc9305aa41c81cf5eab4215dea8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702426190137027744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: d87ee16e-300f-4797-b0be-efc256d0e827,},Annotations:map[string]string{io.kubernetes.container.hash: cfa710aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41,PodSandboxId:a0fefa8877f017ebeb4bc2b3102f04d8a8344acd0e2440054d81f908ca4858a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702426189879010423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zk4wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
20fe8f7-0c1f-4be3-8184-cd3d6cc19a43,},Annotations:map[string]string{io.kubernetes.container.hash: b6cc6198,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7,PodSandboxId:c985b7ae69ba37bc8f7025da34682fcb5118ba566c618e7dc4cbed049b75416b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702426182958030058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a26c02c6afbf1d5b4c9f495af669df65,},An
notations:map[string]string{io.kubernetes.container.hash: 8fc23393,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee,PodSandboxId:a568f2ed2f3063586c863022fc223732c8fb2e89aa61194154ac76a0e815ab72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702426182646324616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dc52e0e7f1079fa01d7df8c37839c50,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673,PodSandboxId:0a36632b4feccd3eca6a32fae7b9eaa141825401eac3db221dbf8f7f131a034b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702426182514141514,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
72fbc2300f80dc239672c9760e6a959,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1,PodSandboxId:6a049b2f1db3f4f51b1007c573f5e82a975fee935eb69fed92d815cfed5858d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702426182170181549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
4b09929ec1461cf7ee413fae258f89e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c2ca634,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=573aa061-b777-4c50-9aa6-6ae21b2d6987 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c290417afdb45       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   81a2e8655210b       storage-provisioner
	c8bb98c21bcff       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   ec031568c78d9       busybox
	125252879d69a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   975f035f145ad       coredns-5dd5756b68-ftv9l
	705b27e3bd760       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   81a2e8655210b       storage-provisioner
	545581d8fb2dd       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   a0fefa8877f01       kube-proxy-zk4wl
	fd8469f4d2e98       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   c985b7ae69ba3       etcd-default-k8s-diff-port-743278
	c94b9bf453ae3       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   a568f2ed2f306       kube-scheduler-default-k8s-diff-port-743278
	57e6249b6837d       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   0a36632b4fecc       kube-controller-manager-default-k8s-diff-port-743278
	c4c918252a292       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   6a049b2f1db3f       kube-apiserver-default-k8s-diff-port-743278
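(Editor's note) The container listing above is the CRI-level view of the node. A minimal sketch of reproducing it by hand, assuming SSH access to the profile shown in this log (default-k8s-diff-port-743278) and that crictl is on the node's PATH, as it normally is in the minikube VM:

    # open a shell on the node and list all containers known to CRI-O, including exited ones
    out/minikube-linux-amd64 -p default-k8s-diff-port-743278 ssh "sudo crictl ps -a"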
	
	* 
	* ==> coredns [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:32823 - 224 "HINFO IN 3325233440478565840.819332321352294558. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015804027s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-743278
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-743278
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446
	                    minikube.k8s.io/name=default-k8s-diff-port-743278
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_13T00_01_37_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Dec 2023 00:01:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-743278
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 13 Dec 2023 00:23:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 13 Dec 2023 00:20:31 +0000   Wed, 13 Dec 2023 00:01:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 13 Dec 2023 00:20:31 +0000   Wed, 13 Dec 2023 00:01:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 13 Dec 2023 00:20:31 +0000   Wed, 13 Dec 2023 00:01:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 13 Dec 2023 00:20:31 +0000   Wed, 13 Dec 2023 00:09:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.144
	  Hostname:    default-k8s-diff-port-743278
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 01ab38812de04a528da538e9dc0b7d5c
	  System UUID:                01ab3881-2de0-4a52-8da5-38e9dc0b7d5c
	  Boot ID:                    9a33e9f0-dbcd-4523-b2ac-2b7554456859
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-5dd5756b68-ftv9l                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-743278                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-743278              250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-743278     200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-zk4wl                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-743278              100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-6q9jg                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-743278 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-743278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-743278 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-743278 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-743278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-743278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-743278 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-743278 event: Registered Node default-k8s-diff-port-743278 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-743278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-743278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-743278 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-743278 event: Registered Node default-k8s-diff-port-743278 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec13 00:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.075372] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.700866] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.655871] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.163833] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000081] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.650326] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000069] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.307485] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.116687] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.162977] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.131569] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.235780] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +17.473233] systemd-fstab-generator[908]: Ignoring "noauto" for root device
	[Dec13 00:10] kauditd_printk_skb: 29 callbacks suppressed
	
	* 
	* ==> etcd [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7] <==
	* {"level":"info","ts":"2023-12-13T00:09:51.425032Z","caller":"traceutil/trace.go:171","msg":"trace[650216120] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:507; }","duration":"205.626156ms","start":"2023-12-13T00:09:51.219397Z","end":"2023-12-13T00:09:51.425023Z","steps":["trace[650216120] 'agreement among raft nodes before linearized reading'  (duration: 204.850093ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-13T00:09:52.069331Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"514.178952ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15328437152858814584 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/default-k8s-diff-port-743278.17a03b977da7a8a1\" mod_revision:505 > success:<request_put:<key:\"/registry/events/default/default-k8s-diff-port-743278.17a03b977da7a8a1\" value_size:690 lease:6105065116004038750 >> failure:<request_range:<key:\"/registry/events/default/default-k8s-diff-port-743278.17a03b977da7a8a1\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-13T00:09:52.070508Z","caller":"traceutil/trace.go:171","msg":"trace[633026732] linearizableReadLoop","detail":"{readStateIndex:541; appliedIndex:539; }","duration":"555.643749ms","start":"2023-12-13T00:09:51.514851Z","end":"2023-12-13T00:09:52.070494Z","steps":["trace[633026732] 'read index received'  (duration: 40.190774ms)","trace[633026732] 'applied index is now lower than readState.Index'  (duration: 515.452115ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-13T00:09:52.070767Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"555.957103ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-6q9jg\" ","response":"range_response_count:1 size:3866"}
	{"level":"info","ts":"2023-12-13T00:09:52.070853Z","caller":"traceutil/trace.go:171","msg":"trace[785219015] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-6q9jg; range_end:; response_count:1; response_revision:509; }","duration":"556.051038ms","start":"2023-12-13T00:09:51.514793Z","end":"2023-12-13T00:09:52.070844Z","steps":["trace[785219015] 'agreement among raft nodes before linearized reading'  (duration: 555.801537ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-13T00:09:52.070913Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-13T00:09:51.514778Z","time spent":"556.122994ms","remote":"127.0.0.1:57354","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":3890,"request content":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-6q9jg\" "}
	{"level":"info","ts":"2023-12-13T00:09:52.071088Z","caller":"traceutil/trace.go:171","msg":"trace[1869975272] transaction","detail":"{read_only:false; response_revision:508; number_of_response:1; }","duration":"638.637093ms","start":"2023-12-13T00:09:51.432441Z","end":"2023-12-13T00:09:52.071078Z","steps":["trace[1869975272] 'process raft request'  (duration: 122.592468ms)","trace[1869975272] 'compare'  (duration: 513.547539ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-13T00:09:52.07118Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-13T00:09:51.43243Z","time spent":"638.702068ms","remote":"127.0.0.1:57330","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":778,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/default-k8s-diff-port-743278.17a03b977da7a8a1\" mod_revision:505 > success:<request_put:<key:\"/registry/events/default/default-k8s-diff-port-743278.17a03b977da7a8a1\" value_size:690 lease:6105065116004038750 >> failure:<request_range:<key:\"/registry/events/default/default-k8s-diff-port-743278.17a03b977da7a8a1\" > >"}
	{"level":"info","ts":"2023-12-13T00:09:52.071194Z","caller":"traceutil/trace.go:171","msg":"trace[317082638] transaction","detail":"{read_only:false; response_revision:509; number_of_response:1; }","duration":"637.835882ms","start":"2023-12-13T00:09:51.433345Z","end":"2023-12-13T00:09:52.071181Z","steps":["trace[317082638] 'process raft request'  (duration: 637.084881ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-13T00:09:52.071379Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-13T00:09:51.433336Z","time spent":"638.010942ms","remote":"127.0.0.1:57354","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3558,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:490 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3504 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"warn","ts":"2023-12-13T00:09:52.849252Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.292409ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15328437152858814589 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/default-k8s-diff-port-743278.17a03b977da7b98e\" mod_revision:506 > success:<request_put:<key:\"/registry/events/default/default-k8s-diff-port-743278.17a03b977da7b98e\" value_size:688 lease:6105065116004038750 >> failure:<request_range:<key:\"/registry/events/default/default-k8s-diff-port-743278.17a03b977da7b98e\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-13T00:09:52.849392Z","caller":"traceutil/trace.go:171","msg":"trace[754923752] linearizableReadLoop","detail":"{readStateIndex:542; appliedIndex:541; }","duration":"648.609624ms","start":"2023-12-13T00:09:52.200773Z","end":"2023-12-13T00:09:52.849383Z","steps":["trace[754923752] 'read index received'  (duration: 383.093573ms)","trace[754923752] 'applied index is now lower than readState.Index'  (duration: 265.514973ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-13T00:09:52.849444Z","caller":"traceutil/trace.go:171","msg":"trace[1150931609] transaction","detail":"{read_only:false; response_revision:510; number_of_response:1; }","duration":"770.516974ms","start":"2023-12-13T00:09:52.078921Z","end":"2023-12-13T00:09:52.849438Z","steps":["trace[1150931609] 'process raft request'  (duration: 504.98592ms)","trace[1150931609] 'compare'  (duration: 264.419149ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-13T00:09:52.849481Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-13T00:09:52.07891Z","time spent":"770.54701ms","remote":"127.0.0.1:57330","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":776,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/default-k8s-diff-port-743278.17a03b977da7b98e\" mod_revision:506 > success:<request_put:<key:\"/registry/events/default/default-k8s-diff-port-743278.17a03b977da7b98e\" value_size:688 lease:6105065116004038750 >> failure:<request_range:<key:\"/registry/events/default/default-k8s-diff-port-743278.17a03b977da7b98e\" > >"}
	{"level":"warn","ts":"2023-12-13T00:09:52.8498Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"649.050261ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4012"}
	{"level":"info","ts":"2023-12-13T00:09:52.849825Z","caller":"traceutil/trace.go:171","msg":"trace[89440002] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:510; }","duration":"649.079443ms","start":"2023-12-13T00:09:52.200739Z","end":"2023-12-13T00:09:52.849819Z","steps":["trace[89440002] 'agreement among raft nodes before linearized reading'  (duration: 648.967899ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-13T00:09:52.849888Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-13T00:09:52.200719Z","time spent":"649.162259ms","remote":"127.0.0.1:57414","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":4036,"request content":"key:\"/registry/deployments/kube-system/coredns\" "}
	{"level":"warn","ts":"2023-12-13T00:09:52.850005Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.870598ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:764"}
	{"level":"info","ts":"2023-12-13T00:09:52.850019Z","caller":"traceutil/trace.go:171","msg":"trace[962130386] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:510; }","duration":"170.885646ms","start":"2023-12-13T00:09:52.679129Z","end":"2023-12-13T00:09:52.850015Z","steps":["trace[962130386] 'agreement among raft nodes before linearized reading'  (duration: 170.852842ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-13T00:09:53.456175Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.23288ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15328437152858814593 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/default-k8s-diff-port-743278.17a03b977da690c4\" mod_revision:507 > success:<request_put:<key:\"/registry/events/default/default-k8s-diff-port-743278.17a03b977da690c4\" value_size:694 lease:6105065116004038750 >> failure:<request_range:<key:\"/registry/events/default/default-k8s-diff-port-743278.17a03b977da690c4\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-13T00:09:53.456319Z","caller":"traceutil/trace.go:171","msg":"trace[1877575358] transaction","detail":"{read_only:false; response_revision:511; number_of_response:1; }","duration":"596.82342ms","start":"2023-12-13T00:09:52.859479Z","end":"2023-12-13T00:09:53.456302Z","steps":["trace[1877575358] 'process raft request'  (duration: 425.396858ms)","trace[1877575358] 'compare'  (duration: 170.993407ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-13T00:09:53.456409Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-13T00:09:52.859461Z","time spent":"596.892067ms","remote":"127.0.0.1:57330","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":782,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/default-k8s-diff-port-743278.17a03b977da690c4\" mod_revision:507 > success:<request_put:<key:\"/registry/events/default/default-k8s-diff-port-743278.17a03b977da690c4\" value_size:694 lease:6105065116004038750 >> failure:<request_range:<key:\"/registry/events/default/default-k8s-diff-port-743278.17a03b977da690c4\" > >"}
	{"level":"info","ts":"2023-12-13T00:19:46.341446Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":825}
	{"level":"info","ts":"2023-12-13T00:19:46.34423Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":825,"took":"2.172345ms","hash":3109910370}
	{"level":"info","ts":"2023-12-13T00:19:46.344314Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3109910370,"revision":825,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  00:23:16 up 14 min,  0 users,  load average: 0.06, 0.16, 0.14
	Linux default-k8s-diff-port-743278 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1] <==
	* I1213 00:19:48.274429       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1213 00:19:49.274035       1 handler_proxy.go:93] no RequestInfo found in the context
	W1213 00:19:49.274035       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:19:49.274314       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1213 00:19:49.274323       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1213 00:19:49.274359       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:19:49.276274       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1213 00:20:48.131435       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1213 00:20:49.274899       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:20:49.274999       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1213 00:20:49.275010       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 00:20:49.277470       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:20:49.277679       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:20:49.277730       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1213 00:21:48.131059       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1213 00:22:48.131656       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1213 00:22:49.275792       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:22:49.276019       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1213 00:22:49.276119       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 00:22:49.277916       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:22:49.278047       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:22:49.278056       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673] <==
	* I1213 00:17:31.862376       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:18:01.373281       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:18:01.871913       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:18:31.378846       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:18:31.881503       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:19:01.384978       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:19:01.891348       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:19:31.391250       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:19:31.902528       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:20:01.396940       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:20:01.911124       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:20:31.403047       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:20:31.922938       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1213 00:21:00.138546       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="864.731µs"
	E1213 00:21:01.409532       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:21:01.932487       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1213 00:21:15.140981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="193.169µs"
	E1213 00:21:31.414006       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:21:31.941009       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:22:01.420372       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:22:01.949105       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:22:31.425584       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:22:31.957339       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:23:01.432401       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:23:01.967705       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41] <==
	* I1213 00:09:50.161458       1 server_others.go:69] "Using iptables proxy"
	I1213 00:09:50.182264       1 node.go:141] Successfully retrieved node IP: 192.168.72.144
	I1213 00:09:50.423929       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1213 00:09:50.424018       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 00:09:50.435326       1 server_others.go:152] "Using iptables Proxier"
	I1213 00:09:50.435446       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1213 00:09:50.435691       1 server.go:846] "Version info" version="v1.28.4"
	I1213 00:09:50.443059       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 00:09:50.447834       1 config.go:188] "Starting service config controller"
	I1213 00:09:50.447984       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1213 00:09:50.448042       1 config.go:97] "Starting endpoint slice config controller"
	I1213 00:09:50.448081       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1213 00:09:50.448785       1 config.go:315] "Starting node config controller"
	I1213 00:09:50.448822       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1213 00:09:50.549786       1 shared_informer.go:318] Caches are synced for node config
	I1213 00:09:50.549839       1 shared_informer.go:318] Caches are synced for service config
	I1213 00:09:50.549936       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee] <==
	* I1213 00:09:45.149971       1 serving.go:348] Generated self-signed cert in-memory
	W1213 00:09:48.194725       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 00:09:48.194898       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 00:09:48.194914       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 00:09:48.195013       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 00:09:48.268450       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1213 00:09:48.268548       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 00:09:48.276449       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 00:09:48.276498       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1213 00:09:48.281076       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1213 00:09:48.281175       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1213 00:09:48.377578       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-12-13 00:09:14 UTC, ends at Wed 2023-12-13 00:23:16 UTC. --
	Dec 13 00:20:41 default-k8s-diff-port-743278 kubelet[914]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 00:20:41 default-k8s-diff-port-743278 kubelet[914]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 00:20:48 default-k8s-diff-port-743278 kubelet[914]: E1213 00:20:48.133228     914 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 13 00:20:48 default-k8s-diff-port-743278 kubelet[914]: E1213 00:20:48.133333     914 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 13 00:20:48 default-k8s-diff-port-743278 kubelet[914]: E1213 00:20:48.133677     914 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-24d9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-6q9jg_kube-system(b1849258-4fd1-43a5-b67b-02d8e44acd8b): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 13 00:20:48 default-k8s-diff-port-743278 kubelet[914]: E1213 00:20:48.133773     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:21:00 default-k8s-diff-port-743278 kubelet[914]: E1213 00:21:00.118389     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:21:15 default-k8s-diff-port-743278 kubelet[914]: E1213 00:21:15.123892     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:21:26 default-k8s-diff-port-743278 kubelet[914]: E1213 00:21:26.118150     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:21:37 default-k8s-diff-port-743278 kubelet[914]: E1213 00:21:37.119894     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:21:41 default-k8s-diff-port-743278 kubelet[914]: E1213 00:21:41.135684     914 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 13 00:21:41 default-k8s-diff-port-743278 kubelet[914]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 00:21:41 default-k8s-diff-port-743278 kubelet[914]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 00:21:41 default-k8s-diff-port-743278 kubelet[914]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 00:21:48 default-k8s-diff-port-743278 kubelet[914]: E1213 00:21:48.118392     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:22:01 default-k8s-diff-port-743278 kubelet[914]: E1213 00:22:01.118373     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:22:13 default-k8s-diff-port-743278 kubelet[914]: E1213 00:22:13.117898     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:22:26 default-k8s-diff-port-743278 kubelet[914]: E1213 00:22:26.117573     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:22:39 default-k8s-diff-port-743278 kubelet[914]: E1213 00:22:39.118801     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:22:41 default-k8s-diff-port-743278 kubelet[914]: E1213 00:22:41.135283     914 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 13 00:22:41 default-k8s-diff-port-743278 kubelet[914]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 00:22:41 default-k8s-diff-port-743278 kubelet[914]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 00:22:41 default-k8s-diff-port-743278 kubelet[914]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 00:22:52 default-k8s-diff-port-743278 kubelet[914]: E1213 00:22:52.118728     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:23:06 default-k8s-diff-port-743278 kubelet[914]: E1213 00:23:06.118488     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	
	* 
	* ==> storage-provisioner [705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a] <==
	* I1213 00:09:50.487844       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 00:10:20.490481       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974] <==
	* I1213 00:10:21.485573       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 00:10:21.499559       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 00:10:21.499829       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 00:10:38.907235       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 00:10:38.907582       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7b29af0b-eb3e-4d78-a9af-aaad07e4d87b", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-743278_2c120ac1-e14b-4d75-9fd9-1814f0326f46 became leader
	I1213 00:10:38.907737       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-743278_2c120ac1-e14b-4d75-9fd9-1814f0326f46!
	I1213 00:10:39.007874       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-743278_2c120ac1-e14b-4d75-9fd9-1814f0326f46!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-743278 -n default-k8s-diff-port-743278
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-743278 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-6q9jg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-743278 describe pod metrics-server-57f55c9bc5-6q9jg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-743278 describe pod metrics-server-57f55c9bc5-6q9jg: exit status 1 (68.511726ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-6q9jg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-743278 describe pod metrics-server-57f55c9bc5-6q9jg: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1213 00:15:11.804901  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
E1213 00:16:34.853668  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-143586 -n no-preload-143586
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-13 00:23:40.586680531 +0000 UTC m=+5344.142818074
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-143586 -n no-preload-143586
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-143586 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-143586 logs -n 25: (1.618848137s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 13 Dec 23 00:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-380248                              | cert-expiration-380248       | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 12 Dec 23 23:58 UTC |
	| start   | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 13 Dec 23 00:01 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p pause-042245                                        | pause-042245                 | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 12 Dec 23 23:58 UTC |
	| start   | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 13 Dec 23 00:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-884273                              | stopped-upgrade-884273       | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-884273                              | stopped-upgrade-884273       | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC | 13 Dec 23 00:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-343019 | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC | 13 Dec 23 00:00 UTC |
	|         | disable-driver-mounts-343019                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC | 13 Dec 23 00:01 UTC |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-508612        | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-335807            | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-143586             | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-743278  | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:02 UTC | 13 Dec 23 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:02 UTC |                     |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-508612             | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:03 UTC | 13 Dec 23 00:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-335807                 | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-143586                  | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-743278       | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/13 00:04:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 00:04:40.034430  177409 out.go:296] Setting OutFile to fd 1 ...
	I1213 00:04:40.034592  177409 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:04:40.034601  177409 out.go:309] Setting ErrFile to fd 2...
	I1213 00:04:40.034606  177409 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:04:40.034805  177409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1213 00:04:40.035357  177409 out.go:303] Setting JSON to false
	I1213 00:04:40.036280  177409 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10028,"bootTime":1702415852,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 00:04:40.036342  177409 start.go:138] virtualization: kvm guest
	I1213 00:04:40.038707  177409 out.go:177] * [default-k8s-diff-port-743278] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 00:04:40.040139  177409 out.go:177]   - MINIKUBE_LOCATION=17777
	I1213 00:04:40.040129  177409 notify.go:220] Checking for updates...
	I1213 00:04:40.041788  177409 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 00:04:40.043246  177409 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:04:40.044627  177409 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1213 00:04:40.046091  177409 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 00:04:40.047562  177409 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 00:04:40.049427  177409 config.go:182] Loaded profile config "default-k8s-diff-port-743278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:04:40.049930  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:04:40.049979  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:04:40.064447  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44247
	I1213 00:04:40.064825  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:04:40.065333  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:04:40.065352  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:04:40.065686  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:04:40.065850  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:04:40.066092  177409 driver.go:392] Setting default libvirt URI to qemu:///system
	I1213 00:04:40.066357  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:04:40.066389  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:04:40.080217  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38499
	I1213 00:04:40.080643  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:04:40.081072  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:04:40.081098  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:04:40.081436  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:04:40.081622  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:04:40.114108  177409 out.go:177] * Using the kvm2 driver based on existing profile
	I1213 00:04:40.115585  177409 start.go:298] selected driver: kvm2
	I1213 00:04:40.115603  177409 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-743278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-743278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:04:40.115714  177409 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 00:04:40.116379  177409 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:04:40.116485  177409 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17777-136241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1213 00:04:40.131964  177409 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1213 00:04:40.132324  177409 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 00:04:40.132392  177409 cni.go:84] Creating CNI manager for ""
	I1213 00:04:40.132405  177409 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:04:40.132416  177409 start_flags.go:323] config:
	{Name:default-k8s-diff-port-743278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-74327
8 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:04:40.132599  177409 iso.go:125] acquiring lock: {Name:mkaf0c5717f5bf6253bd7ebf86eb20a82d195bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:04:40.135330  177409 out.go:177] * Starting control plane node default-k8s-diff-port-743278 in cluster default-k8s-diff-port-743278
	I1213 00:04:36.772718  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:39.844694  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:40.136912  177409 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1213 00:04:40.136959  177409 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1213 00:04:40.136972  177409 cache.go:56] Caching tarball of preloaded images
	I1213 00:04:40.137094  177409 preload.go:174] Found /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 00:04:40.137108  177409 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1213 00:04:40.137215  177409 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/config.json ...
	I1213 00:04:40.137413  177409 start.go:365] acquiring machines lock for default-k8s-diff-port-743278: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 00:04:45.924700  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:48.996768  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:55.076732  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:58.148779  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:04.228721  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:07.300700  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:13.380743  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:16.452690  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:22.532695  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:25.604771  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:31.684681  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:34.756720  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:40.836697  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:43.908711  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:49.988729  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:53.060691  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:59.140737  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:02.212709  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:08.292717  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:11.364746  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:17.444722  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:20.516796  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:26.596650  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:29.668701  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:35.748723  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:38.820688  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:44.900719  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:47.972683  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:54.052708  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:57.124684  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:03.204728  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:06.276720  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:12.356681  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:15.428743  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:21.508696  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:24.580749  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:30.660747  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:33.732746  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:39.812738  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:42.884767  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:48.964744  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:52.036691  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:58.116726  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:08:01.188638  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:08:07.268756  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:08:10.340725  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:08:13.345031  177122 start.go:369] acquired machines lock for "embed-certs-335807" in 4m2.39512191s
	I1213 00:08:13.345120  177122 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:08:13.345129  177122 fix.go:54] fixHost starting: 
	I1213 00:08:13.345524  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:08:13.345564  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:08:13.360333  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44799
	I1213 00:08:13.360832  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:08:13.361366  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:08:13.361390  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:08:13.361769  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:08:13.361941  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:13.362104  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:08:13.363919  177122 fix.go:102] recreateIfNeeded on embed-certs-335807: state=Stopped err=<nil>
	I1213 00:08:13.363938  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	W1213 00:08:13.364125  177122 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:08:13.366077  177122 out.go:177] * Restarting existing kvm2 VM for "embed-certs-335807" ...
	I1213 00:08:13.342763  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:08:13.342804  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:08:13.344878  176813 machine.go:91] provisioned docker machine in 4m37.409041046s
	I1213 00:08:13.344942  176813 fix.go:56] fixHost completed within 4m37.430106775s
	I1213 00:08:13.344949  176813 start.go:83] releasing machines lock for "old-k8s-version-508612", held for 4m37.430132032s
	W1213 00:08:13.344965  176813 start.go:694] error starting host: provision: host is not running
	W1213 00:08:13.345107  176813 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1213 00:08:13.345120  176813 start.go:709] Will try again in 5 seconds ...
	I1213 00:08:13.367310  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Start
	I1213 00:08:13.367451  177122 main.go:141] libmachine: (embed-certs-335807) Ensuring networks are active...
	I1213 00:08:13.368551  177122 main.go:141] libmachine: (embed-certs-335807) Ensuring network default is active
	I1213 00:08:13.368936  177122 main.go:141] libmachine: (embed-certs-335807) Ensuring network mk-embed-certs-335807 is active
	I1213 00:08:13.369290  177122 main.go:141] libmachine: (embed-certs-335807) Getting domain xml...
	I1213 00:08:13.369993  177122 main.go:141] libmachine: (embed-certs-335807) Creating domain...
	I1213 00:08:14.617766  177122 main.go:141] libmachine: (embed-certs-335807) Waiting to get IP...
	I1213 00:08:14.618837  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:14.619186  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:14.619322  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:14.619202  177987 retry.go:31] will retry after 226.757968ms: waiting for machine to come up
	I1213 00:08:14.847619  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:14.847962  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:14.847996  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:14.847892  177987 retry.go:31] will retry after 390.063287ms: waiting for machine to come up
	I1213 00:08:15.239515  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:15.239906  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:15.239939  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:15.239845  177987 retry.go:31] will retry after 341.644988ms: waiting for machine to come up
	I1213 00:08:15.583408  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:15.583848  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:15.583878  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:15.583796  177987 retry.go:31] will retry after 420.722896ms: waiting for machine to come up
	I1213 00:08:18.346616  176813 start.go:365] acquiring machines lock for old-k8s-version-508612: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 00:08:16.006364  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:16.006767  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:16.006803  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:16.006713  177987 retry.go:31] will retry after 548.041925ms: waiting for machine to come up
	I1213 00:08:16.556444  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:16.556880  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:16.556912  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:16.556833  177987 retry.go:31] will retry after 862.959808ms: waiting for machine to come up
	I1213 00:08:17.421147  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:17.421596  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:17.421630  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:17.421544  177987 retry.go:31] will retry after 1.085782098s: waiting for machine to come up
	I1213 00:08:18.509145  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:18.509595  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:18.509619  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:18.509556  177987 retry.go:31] will retry after 1.303432656s: waiting for machine to come up
	I1213 00:08:19.814985  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:19.815430  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:19.815473  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:19.815367  177987 retry.go:31] will retry after 1.337474429s: waiting for machine to come up
	I1213 00:08:21.154792  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:21.155213  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:21.155236  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:21.155165  177987 retry.go:31] will retry after 2.104406206s: waiting for machine to come up
	I1213 00:08:23.262615  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:23.263144  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:23.263174  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:23.263066  177987 retry.go:31] will retry after 2.064696044s: waiting for machine to come up
	I1213 00:08:25.330105  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:25.330586  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:25.330621  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:25.330544  177987 retry.go:31] will retry after 2.270537288s: waiting for machine to come up
	I1213 00:08:27.602267  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:27.602787  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:27.602810  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:27.602758  177987 retry.go:31] will retry after 3.020844169s: waiting for machine to come up
	I1213 00:08:30.626232  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:30.626696  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:30.626731  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:30.626645  177987 retry.go:31] will retry after 5.329279261s: waiting for machine to come up
	I1213 00:08:37.405257  177307 start.go:369] acquired machines lock for "no-preload-143586" in 4m8.02482326s
	I1213 00:08:37.405329  177307 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:08:37.405340  177307 fix.go:54] fixHost starting: 
	I1213 00:08:37.405777  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:08:37.405830  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:08:37.422055  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41433
	I1213 00:08:37.422558  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:08:37.423112  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:08:37.423143  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:08:37.423462  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:08:37.423650  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:08:37.423795  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:08:37.425302  177307 fix.go:102] recreateIfNeeded on no-preload-143586: state=Stopped err=<nil>
	I1213 00:08:37.425345  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	W1213 00:08:37.425519  177307 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:08:37.428723  177307 out.go:177] * Restarting existing kvm2 VM for "no-preload-143586" ...
	I1213 00:08:35.958579  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:35.959166  177122 main.go:141] libmachine: (embed-certs-335807) Found IP for machine: 192.168.61.249
	I1213 00:08:35.959200  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has current primary IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:35.959212  177122 main.go:141] libmachine: (embed-certs-335807) Reserving static IP address...
	I1213 00:08:35.959676  177122 main.go:141] libmachine: (embed-certs-335807) Reserved static IP address: 192.168.61.249
	I1213 00:08:35.959731  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "embed-certs-335807", mac: "52:54:00:20:1b:c0", ip: "192.168.61.249"} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:35.959746  177122 main.go:141] libmachine: (embed-certs-335807) Waiting for SSH to be available...
	I1213 00:08:35.959779  177122 main.go:141] libmachine: (embed-certs-335807) DBG | skip adding static IP to network mk-embed-certs-335807 - found existing host DHCP lease matching {name: "embed-certs-335807", mac: "52:54:00:20:1b:c0", ip: "192.168.61.249"}
	I1213 00:08:35.959795  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Getting to WaitForSSH function...
	I1213 00:08:35.962033  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:35.962419  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:35.962448  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:35.962552  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Using SSH client type: external
	I1213 00:08:35.962575  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa (-rw-------)
	I1213 00:08:35.962608  177122 main.go:141] libmachine: (embed-certs-335807) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:08:35.962624  177122 main.go:141] libmachine: (embed-certs-335807) DBG | About to run SSH command:
	I1213 00:08:35.962637  177122 main.go:141] libmachine: (embed-certs-335807) DBG | exit 0
	I1213 00:08:36.056268  177122 main.go:141] libmachine: (embed-certs-335807) DBG | SSH cmd err, output: <nil>: 
	I1213 00:08:36.056649  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetConfigRaw
	I1213 00:08:36.057283  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetIP
	I1213 00:08:36.060244  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.060656  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.060705  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.060930  177122 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/config.json ...
	I1213 00:08:36.061132  177122 machine.go:88] provisioning docker machine ...
	I1213 00:08:36.061150  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:36.061386  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetMachineName
	I1213 00:08:36.061569  177122 buildroot.go:166] provisioning hostname "embed-certs-335807"
	I1213 00:08:36.061593  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetMachineName
	I1213 00:08:36.061737  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.063997  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.064352  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.064374  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.064532  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:36.064743  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.064899  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.065039  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:36.065186  177122 main.go:141] libmachine: Using SSH client type: native
	I1213 00:08:36.065556  177122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.249 22 <nil> <nil>}
	I1213 00:08:36.065575  177122 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-335807 && echo "embed-certs-335807" | sudo tee /etc/hostname
	I1213 00:08:36.199697  177122 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-335807
	
	I1213 00:08:36.199733  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.202879  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.203289  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.203312  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.203495  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:36.203705  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.203845  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.203968  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:36.204141  177122 main.go:141] libmachine: Using SSH client type: native
	I1213 00:08:36.204545  177122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.249 22 <nil> <nil>}
	I1213 00:08:36.204564  177122 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-335807' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-335807/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-335807' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:08:36.336285  177122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:08:36.336315  177122 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:08:36.336337  177122 buildroot.go:174] setting up certificates
	I1213 00:08:36.336350  177122 provision.go:83] configureAuth start
	I1213 00:08:36.336364  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetMachineName
	I1213 00:08:36.336658  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetIP
	I1213 00:08:36.339327  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.339695  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.339727  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.339861  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.342106  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.342485  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.342506  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.342613  177122 provision.go:138] copyHostCerts
	I1213 00:08:36.342699  177122 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:08:36.342711  177122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:08:36.342795  177122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:08:36.342910  177122 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:08:36.342928  177122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:08:36.342962  177122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:08:36.343051  177122 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:08:36.343061  177122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:08:36.343099  177122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:08:36.343185  177122 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.embed-certs-335807 san=[192.168.61.249 192.168.61.249 localhost 127.0.0.1 minikube embed-certs-335807]
	I1213 00:08:36.680595  177122 provision.go:172] copyRemoteCerts
	I1213 00:08:36.680687  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:08:36.680715  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.683411  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.683664  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.683690  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.683826  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:36.684044  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.684217  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:36.684370  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:08:36.773978  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:08:36.795530  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1213 00:08:36.817104  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 00:08:36.838510  177122 provision.go:86] duration metric: configureAuth took 502.141764ms
	I1213 00:08:36.838544  177122 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:08:36.838741  177122 config.go:182] Loaded profile config "embed-certs-335807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:08:36.838818  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.841372  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.841720  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.841759  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.841875  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:36.842095  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.842276  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.842447  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:36.842593  177122 main.go:141] libmachine: Using SSH client type: native
	I1213 00:08:36.843043  177122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.249 22 <nil> <nil>}
	I1213 00:08:36.843069  177122 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:08:37.150317  177122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:08:37.150364  177122 machine.go:91] provisioned docker machine in 1.089215763s
	I1213 00:08:37.150378  177122 start.go:300] post-start starting for "embed-certs-335807" (driver="kvm2")
	I1213 00:08:37.150391  177122 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:08:37.150424  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.150800  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:08:37.150829  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:37.153552  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.153920  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.153958  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.154075  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:37.154268  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.154406  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:37.154562  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:08:37.245839  177122 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:08:37.249929  177122 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:08:37.249959  177122 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:08:37.250029  177122 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:08:37.250114  177122 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:08:37.250202  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:08:37.258062  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:08:37.280034  177122 start.go:303] post-start completed in 129.642247ms
	I1213 00:08:37.280060  177122 fix.go:56] fixHost completed within 23.934930358s
	I1213 00:08:37.280085  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:37.282572  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.282861  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.282903  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.283059  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:37.283333  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.283516  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.283694  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:37.283898  177122 main.go:141] libmachine: Using SSH client type: native
	I1213 00:08:37.284217  177122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.249 22 <nil> <nil>}
	I1213 00:08:37.284229  177122 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1213 00:08:37.405050  177122 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702426117.378231894
	
	I1213 00:08:37.405077  177122 fix.go:206] guest clock: 1702426117.378231894
	I1213 00:08:37.405099  177122 fix.go:219] Guest: 2023-12-13 00:08:37.378231894 +0000 UTC Remote: 2023-12-13 00:08:37.280064166 +0000 UTC m=+266.483341520 (delta=98.167728ms)
	I1213 00:08:37.405127  177122 fix.go:190] guest clock delta is within tolerance: 98.167728ms
	I1213 00:08:37.405137  177122 start.go:83] releasing machines lock for "embed-certs-335807", held for 24.060057368s
	I1213 00:08:37.405161  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.405417  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetIP
	I1213 00:08:37.408128  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.408513  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.408559  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.408681  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.409264  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.409449  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.409542  177122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:08:37.409611  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:37.409647  177122 ssh_runner.go:195] Run: cat /version.json
	I1213 00:08:37.409673  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:37.412390  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.412733  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.412764  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.412795  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.412910  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:37.413104  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.413187  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.413224  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.413263  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:37.413462  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:37.413455  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:08:37.413633  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.413758  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:37.413899  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
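Two SSH sessions are opened here so that `curl -sS -m 2 https://registry.k8s.io/` and `cat /version.json` can run on the guest in parallel. The sketch below shows the basic pattern of dialing the guest and running a single command with golang.org/x/crypto/ssh; it is an illustration only, not minikube's sshutil/ssh_runner code, and simply reuses the address, user and key path printed above.

```go
// Minimal sketch: run one command on the guest over SSH.
package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs have throwaway host keys
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.61.249:22", "docker",
		"/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa",
		"cat /version.json")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}
```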
	I1213 00:08:37.531948  177122 ssh_runner.go:195] Run: systemctl --version
	I1213 00:08:37.537555  177122 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:08:37.677429  177122 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:08:37.684043  177122 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:08:37.684115  177122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:08:37.702304  177122 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:08:37.702327  177122 start.go:475] detecting cgroup driver to use...
	I1213 00:08:37.702388  177122 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:08:37.716601  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:08:37.728516  177122 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:08:37.728571  177122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:08:37.740595  177122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:08:37.753166  177122 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:08:37.853095  177122 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:08:37.970696  177122 docker.go:219] disabling docker service ...
	I1213 00:08:37.970769  177122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:08:37.983625  177122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:08:37.994924  177122 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:08:38.110057  177122 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:08:38.229587  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:08:38.243052  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:08:38.260480  177122 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1213 00:08:38.260547  177122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:08:38.269442  177122 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:08:38.269508  177122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:08:38.278569  177122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:08:38.287680  177122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:08:38.296798  177122 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:08:38.306247  177122 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:08:38.314189  177122 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:08:38.314251  177122 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:08:38.326702  177122 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:08:38.335111  177122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:08:38.435024  177122 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 00:08:38.600232  177122 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:08:38.600322  177122 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:08:38.606384  177122 start.go:543] Will wait 60s for crictl version
	I1213 00:08:38.606446  177122 ssh_runner.go:195] Run: which crictl
	I1213 00:08:38.611180  177122 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:08:38.654091  177122 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1213 00:08:38.654197  177122 ssh_runner.go:195] Run: crio --version
	I1213 00:08:38.705615  177122 ssh_runner.go:195] Run: crio --version
	I1213 00:08:38.755387  177122 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
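The runner above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup), clears the stale CNI config, and restarts CRI-O before the crictl/crio version checks succeed. Below is a minimal local sketch of the same sed/systemctl sequence, run with os/exec instead of over SSH; it assumes the config file exists and that the process runs as root, and it is not minikube's actual provisioning code.

```go
// Minimal local sketch of the CRI-O reconfiguration shown in the log:
// set the pause image and cgroup manager, then restart the service.
package main

import (
	"log"
	"os/exec"
)

func main() {
	steps := [][]string{
		{"sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|`, "/etc/crio/crio.conf.d/02-crio.conf"},
		{"sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, "/etc/crio/crio.conf.d/02-crio.conf"},
		{"sed", "-i", `/conmon_cgroup = .*/d`, "/etc/crio/crio.conf.d/02-crio.conf"},
		{"sed", "-i", `/cgroup_manager = .*/a conmon_cgroup = "pod"`, "/etc/crio/crio.conf.d/02-crio.conf"},
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "crio"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v: %v\n%s", s, err, out)
		}
	}
}
```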
	I1213 00:08:37.430037  177307 main.go:141] libmachine: (no-preload-143586) Calling .Start
	I1213 00:08:37.430266  177307 main.go:141] libmachine: (no-preload-143586) Ensuring networks are active...
	I1213 00:08:37.430931  177307 main.go:141] libmachine: (no-preload-143586) Ensuring network default is active
	I1213 00:08:37.431290  177307 main.go:141] libmachine: (no-preload-143586) Ensuring network mk-no-preload-143586 is active
	I1213 00:08:37.431640  177307 main.go:141] libmachine: (no-preload-143586) Getting domain xml...
	I1213 00:08:37.432281  177307 main.go:141] libmachine: (no-preload-143586) Creating domain...
	I1213 00:08:38.686491  177307 main.go:141] libmachine: (no-preload-143586) Waiting to get IP...
	I1213 00:08:38.687472  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:38.688010  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:38.688095  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:38.687986  178111 retry.go:31] will retry after 246.453996ms: waiting for machine to come up
	I1213 00:08:38.936453  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:38.936931  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:38.936963  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:38.936879  178111 retry.go:31] will retry after 317.431088ms: waiting for machine to come up
	I1213 00:08:39.256641  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:39.257217  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:39.257241  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:39.257165  178111 retry.go:31] will retry after 379.635912ms: waiting for machine to come up
	I1213 00:08:38.757019  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetIP
	I1213 00:08:38.760125  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:38.760684  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:38.760720  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:38.760949  177122 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1213 00:08:38.765450  177122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:08:38.778459  177122 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1213 00:08:38.778539  177122 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:08:38.819215  177122 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1213 00:08:38.819281  177122 ssh_runner.go:195] Run: which lz4
	I1213 00:08:38.823481  177122 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 00:08:38.829034  177122 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 00:08:38.829069  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1213 00:08:40.721922  177122 crio.go:444] Took 1.898469 seconds to copy over tarball
	I1213 00:08:40.721984  177122 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 00:08:39.638611  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:39.639108  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:39.639137  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:39.639067  178111 retry.go:31] will retry after 596.16391ms: waiting for machine to come up
	I1213 00:08:40.237504  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:40.237957  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:40.237990  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:40.237911  178111 retry.go:31] will retry after 761.995315ms: waiting for machine to come up
	I1213 00:08:41.002003  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:41.002388  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:41.002413  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:41.002329  178111 retry.go:31] will retry after 693.578882ms: waiting for machine to come up
	I1213 00:08:41.697126  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:41.697617  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:41.697652  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:41.697555  178111 retry.go:31] will retry after 1.050437275s: waiting for machine to come up
	I1213 00:08:42.749227  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:42.749833  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:42.749866  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:42.749782  178111 retry.go:31] will retry after 1.175916736s: waiting for machine to come up
	I1213 00:08:43.927564  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:43.928115  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:43.928144  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:43.928065  178111 retry.go:31] will retry after 1.590924957s: waiting for machine to come up
	I1213 00:08:43.767138  177122 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.045121634s)
	I1213 00:08:43.767169  177122 crio.go:451] Took 3.045224 seconds to extract the tarball
	I1213 00:08:43.767178  177122 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 00:08:43.809047  177122 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:08:43.873704  177122 crio.go:496] all images are preloaded for cri-o runtime.
	I1213 00:08:43.873726  177122 cache_images.go:84] Images are preloaded, skipping loading
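The preload step asks crictl for its image list, and only when the expected kube-apiserver image is missing does it copy and unpack preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 into /var. A rough sketch of that check-then-extract flow, run locally and assuming crictl, lz4 and root access are available; it is not minikube's cache_images code.

```go
package main

import (
	"encoding/json"
	"log"
	"os/exec"
	"strings"
)

// hasImage asks crictl for its image list and checks for one repo tag.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var resp struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &resp); err != nil {
		return false, err
	}
	for _, img := range resp.Images {
		for _, t := range img.RepoTags {
			if strings.Contains(t, tag) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
	if err != nil {
		log.Fatal(err)
	}
	if ok {
		log.Println("all images are preloaded, nothing to do")
		return
	}
	// Unpack the preload tarball into /var, as the log does after copying it over.
	if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}
```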
	I1213 00:08:43.873792  177122 ssh_runner.go:195] Run: crio config
	I1213 00:08:43.941716  177122 cni.go:84] Creating CNI manager for ""
	I1213 00:08:43.941747  177122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:08:43.941774  177122 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1213 00:08:43.941800  177122 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.249 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-335807 NodeName:embed-certs-335807 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 00:08:43.942026  177122 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-335807"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 00:08:43.942123  177122 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-335807 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-335807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
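The drop-in written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf carries the node name, node IP, Kubernetes version and CRI socket. Below is a small text/template sketch that renders a similar unit from those values; the template text is modeled on the block shown above, not copied from minikube's real template, and the rendered output is only printed rather than installed.

```go
package main

import (
	"os"
	"text/template"
)

// A simplified drop-in modeled on the [Service] block shown in the log above.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	_ = tmpl.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.28.4",
		"CRISocket":         "unix:///var/run/crio/crio.sock",
		"NodeName":          "embed-certs-335807",
		"NodeIP":            "192.168.61.249",
	})
}
```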
	I1213 00:08:43.942201  177122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1213 00:08:43.951461  177122 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:08:43.951550  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:08:43.960491  177122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1213 00:08:43.976763  177122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 00:08:43.993725  177122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1213 00:08:44.010795  177122 ssh_runner.go:195] Run: grep 192.168.61.249	control-plane.minikube.internal$ /etc/hosts
	I1213 00:08:44.014668  177122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.249	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:08:44.027339  177122 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807 for IP: 192.168.61.249
	I1213 00:08:44.027376  177122 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:08:44.027550  177122 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:08:44.027617  177122 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:08:44.027701  177122 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/client.key
	I1213 00:08:44.027786  177122 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/apiserver.key.ba34ddd8
	I1213 00:08:44.027844  177122 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/proxy-client.key
	I1213 00:08:44.027987  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:08:44.028035  177122 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:08:44.028056  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:08:44.028088  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:08:44.028129  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:08:44.028158  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:08:44.028220  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:08:44.029033  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:08:44.054023  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 00:08:44.078293  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:08:44.102083  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 00:08:44.126293  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:08:44.149409  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:08:44.172887  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:08:44.195662  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:08:44.218979  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:08:44.241598  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:08:44.265251  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:08:44.290073  177122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:08:44.306685  177122 ssh_runner.go:195] Run: openssl version
	I1213 00:08:44.312422  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:08:44.322405  177122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:08:44.327215  177122 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:08:44.327296  177122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:08:44.333427  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1213 00:08:44.343574  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:08:44.353981  177122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:08:44.358997  177122 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:08:44.359051  177122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:08:44.364654  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 00:08:44.375147  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:08:44.384900  177122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:08:44.389492  177122 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:08:44.389553  177122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:08:44.395105  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 00:08:44.404656  177122 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:08:44.409852  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 00:08:44.415755  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 00:08:44.421911  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 00:08:44.428119  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 00:08:44.435646  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 00:08:44.441692  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
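Each existing certificate is validated with `openssl x509 -checkend 86400`, i.e. "will this cert expire within the next 24 hours". The same check in pure Go with crypto/x509 looks roughly like the sketch below; it assumes a PEM-encoded certificate on disk and is only an equivalent illustration, not minikube's cert-handling code.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// will expire before now+d, the Go equivalent of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if soon {
		fmt.Println("certificate expires within 24h, regenerate it")
	}
}
```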
	I1213 00:08:44.447849  177122 kubeadm.go:404] StartCluster: {Name:embed-certs-335807 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-335807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.249 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:08:44.447976  177122 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:08:44.448025  177122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:08:44.495646  177122 cri.go:89] found id: ""
	I1213 00:08:44.495744  177122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:08:44.506405  177122 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1213 00:08:44.506454  177122 kubeadm.go:636] restartCluster start
	I1213 00:08:44.506515  177122 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 00:08:44.516110  177122 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:44.517275  177122 kubeconfig.go:92] found "embed-certs-335807" server: "https://192.168.61.249:8443"
	I1213 00:08:44.519840  177122 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 00:08:44.529214  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:44.529294  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:44.540415  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:44.540447  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:44.540497  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:44.552090  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:45.052810  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:45.052890  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:45.066300  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:45.552897  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:45.553031  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:45.564969  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:45.520191  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:45.520729  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:45.520754  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:45.520662  178111 retry.go:31] will retry after 1.407916355s: waiting for machine to come up
	I1213 00:08:46.930655  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:46.931073  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:46.931138  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:46.930993  178111 retry.go:31] will retry after 2.033169427s: waiting for machine to come up
	I1213 00:08:48.966888  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:48.967318  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:48.967351  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:48.967253  178111 retry.go:31] will retry after 2.277791781s: waiting for machine to come up
	I1213 00:08:46.052915  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:46.053025  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:46.068633  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:46.552208  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:46.552317  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:46.565045  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:47.052533  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:47.052627  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:47.068457  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:47.553040  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:47.553127  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:47.564657  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:48.052228  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:48.052322  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:48.068950  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:48.553171  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:48.553256  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:48.568868  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:49.052389  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:49.052515  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:49.064674  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:49.552894  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:49.553012  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:49.564302  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:50.052843  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:50.052941  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:50.064617  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:50.553231  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:50.553316  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:50.567944  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:51.247665  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:51.248141  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:51.248175  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:51.248098  178111 retry.go:31] will retry after 4.234068925s: waiting for machine to come up
	I1213 00:08:51.052574  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:51.052700  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:51.069491  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:51.553152  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:51.553234  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:51.565331  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:52.052984  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:52.053064  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:52.064748  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:52.552257  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:52.552362  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:52.563626  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:53.053196  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:53.053287  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:53.064273  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:53.552319  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:53.552423  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:53.563587  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:54.053227  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:54.053331  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:54.065636  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:54.530249  177122 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
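The block above polls `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every half second until the surrounding context deadline fires, at which point the cluster is declared in need of reconfiguration. A minimal sketch of that poll-until-deadline pattern, using local exec instead of SSH and not minikube's api_server package:

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep until it finds a kube-apiserver process
// or the context deadline passes, a local stand-in for the SSH-based checks above.
func waitForAPIServerPID(ctx context.Context) (string, error) {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil
		}
		select {
		case <-ctx.Done():
			return "", ctx.Err() // e.g. "context deadline exceeded" -> needs reconfigure
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if pid, err := waitForAPIServerPID(ctx); err != nil {
		fmt.Println("apiserver error:", err)
	} else {
		fmt.Println("apiserver pid:", pid)
	}
}
```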
	I1213 00:08:54.530301  177122 kubeadm.go:1135] stopping kube-system containers ...
	I1213 00:08:54.530330  177122 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 00:08:54.530424  177122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:08:54.570200  177122 cri.go:89] found id: ""
	I1213 00:08:54.570275  177122 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 00:08:54.586722  177122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:08:54.596240  177122 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:08:54.596313  177122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:08:54.605202  177122 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1213 00:08:54.605226  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:54.718619  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:55.483563  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:55.483985  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:55.484024  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:55.483927  178111 retry.go:31] will retry after 5.446962632s: waiting for machine to come up
	I1213 00:08:55.944250  177122 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.225592219s)
	I1213 00:08:55.944282  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:56.132294  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:56.214859  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:56.297313  177122 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:08:56.297421  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:56.315946  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:56.830228  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:57.329695  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:57.830336  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:58.329610  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:58.829933  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:58.853978  177122 api_server.go:72] duration metric: took 2.556667404s to wait for apiserver process to appear ...
	I1213 00:08:58.854013  177122 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:08:58.854054  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
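Once a kube-apiserver process exists, the runner switches from checking for a PID to polling the /healthz endpoint on https://192.168.61.249:8443. A bare-bones sketch of one such probe is below; it skips TLS verification for brevity, whereas the real check trusts the cluster CA, and it performs a single request rather than the repeated polling the log shows.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: skip verification; the real check verifies against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.61.249:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
}
```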
	I1213 00:09:02.161624  177409 start.go:369] acquired machines lock for "default-k8s-diff-port-743278" in 4m22.024178516s
	I1213 00:09:02.161693  177409 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:09:02.161704  177409 fix.go:54] fixHost starting: 
	I1213 00:09:02.162127  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:02.162174  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:02.179045  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41749
	I1213 00:09:02.179554  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:02.180099  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:02.180131  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:02.180461  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:02.180658  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:02.180795  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:02.182459  177409 fix.go:102] recreateIfNeeded on default-k8s-diff-port-743278: state=Stopped err=<nil>
	I1213 00:09:02.182498  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	W1213 00:09:02.182657  177409 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:09:02.184934  177409 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-743278" ...
	I1213 00:09:00.933522  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:00.934020  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has current primary IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:00.934046  177307 main.go:141] libmachine: (no-preload-143586) Found IP for machine: 192.168.50.181
	I1213 00:09:00.934058  177307 main.go:141] libmachine: (no-preload-143586) Reserving static IP address...
	I1213 00:09:00.934538  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "no-preload-143586", mac: "52:54:00:4d:da:7b", ip: "192.168.50.181"} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:00.934573  177307 main.go:141] libmachine: (no-preload-143586) DBG | skip adding static IP to network mk-no-preload-143586 - found existing host DHCP lease matching {name: "no-preload-143586", mac: "52:54:00:4d:da:7b", ip: "192.168.50.181"}
	I1213 00:09:00.934592  177307 main.go:141] libmachine: (no-preload-143586) Reserved static IP address: 192.168.50.181
	I1213 00:09:00.934601  177307 main.go:141] libmachine: (no-preload-143586) Waiting for SSH to be available...
	I1213 00:09:00.934610  177307 main.go:141] libmachine: (no-preload-143586) DBG | Getting to WaitForSSH function...
	I1213 00:09:00.936830  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:00.937236  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:00.937283  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:00.937399  177307 main.go:141] libmachine: (no-preload-143586) DBG | Using SSH client type: external
	I1213 00:09:00.937421  177307 main.go:141] libmachine: (no-preload-143586) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa (-rw-------)
	I1213 00:09:00.937458  177307 main.go:141] libmachine: (no-preload-143586) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.181 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:09:00.937473  177307 main.go:141] libmachine: (no-preload-143586) DBG | About to run SSH command:
	I1213 00:09:00.937485  177307 main.go:141] libmachine: (no-preload-143586) DBG | exit 0
	I1213 00:09:01.024658  177307 main.go:141] libmachine: (no-preload-143586) DBG | SSH cmd err, output: <nil>: 
	I1213 00:09:01.024996  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetConfigRaw
	I1213 00:09:01.025611  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetIP
	I1213 00:09:01.028062  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.028471  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.028509  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.028734  177307 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/config.json ...
	I1213 00:09:01.028955  177307 machine.go:88] provisioning docker machine ...
	I1213 00:09:01.028980  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:01.029193  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetMachineName
	I1213 00:09:01.029394  177307 buildroot.go:166] provisioning hostname "no-preload-143586"
	I1213 00:09:01.029409  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetMachineName
	I1213 00:09:01.029580  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.031949  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.032273  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.032305  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.032413  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.032599  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.032758  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.032877  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.033036  177307 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:01.033377  177307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1213 00:09:01.033395  177307 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-143586 && echo "no-preload-143586" | sudo tee /etc/hostname
	I1213 00:09:01.157420  177307 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-143586
	
	I1213 00:09:01.157461  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.160181  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.160498  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.160535  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.160654  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.160915  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.161104  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.161299  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.161469  177307 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:01.161785  177307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1213 00:09:01.161811  177307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-143586' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-143586/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-143586' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:09:01.287746  177307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:09:01.287776  177307 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:09:01.287835  177307 buildroot.go:174] setting up certificates
	I1213 00:09:01.287844  177307 provision.go:83] configureAuth start
	I1213 00:09:01.287857  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetMachineName
	I1213 00:09:01.288156  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetIP
	I1213 00:09:01.290754  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.291147  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.291179  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.291296  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.293643  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.294002  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.294034  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.294184  177307 provision.go:138] copyHostCerts
	I1213 00:09:01.294243  177307 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:09:01.294256  177307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:09:01.294323  177307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:09:01.294441  177307 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:09:01.294453  177307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:09:01.294489  177307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:09:01.294569  177307 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:09:01.294578  177307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:09:01.294610  177307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:09:01.294683  177307 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.no-preload-143586 san=[192.168.50.181 192.168.50.181 localhost 127.0.0.1 minikube no-preload-143586]
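	The provisioning step above regenerates a server certificate whose SAN list covers the VM IP, localhost, and the machine name. A minimal sketch of issuing such a certificate with Go's standard library, self-signed for brevity (minikube actually signs with the CA under .minikube/certs; the org and SAN values below are just the ones visible in the log):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Self-signed here to keep the sketch short; the provisioner signs with its CA key.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-143586"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs as listed in the log line above.
			IPAddresses: []net.IP{net.ParseIP("192.168.50.181"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "no-preload-143586"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}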
	I1213 00:09:01.407742  177307 provision.go:172] copyRemoteCerts
	I1213 00:09:01.407823  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:09:01.407856  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.410836  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.411141  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.411170  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.411455  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.411698  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.411883  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.412056  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:09:01.501782  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 00:09:01.530009  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:09:01.555147  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1213 00:09:01.580479  177307 provision.go:86] duration metric: configureAuth took 292.598329ms
	I1213 00:09:01.580511  177307 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:09:01.580732  177307 config.go:182] Loaded profile config "no-preload-143586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1213 00:09:01.580835  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.583742  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.584241  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.584274  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.584581  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.584798  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.585004  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.585184  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.585429  177307 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:01.585889  177307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1213 00:09:01.585928  177307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:09:01.909801  177307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:09:01.909855  177307 machine.go:91] provisioned docker machine in 880.876025ms
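	The %!s(MISSING) tokens in the command above (and the %!N(MISSING) in the later date +%s.%N probe) are not corruption in this report: they appear whenever a string that itself contains a printf verb is handed to Go's fmt/log functions as the format argument with no matching operand. A minimal sketch that reproduces the artifact (the command string is a stand-in, not minikube's actual call site):

	package main

	import "log"

	func main() {
		cmd := `sudo mkdir -p /etc/sysconfig && printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube`

		// Used as the format string, the bare %s has no operand, so fmt renders
		// it as %!s(MISSING) -- exactly what shows up in the log lines above.
		log.Printf(cmd)

		// Passed as an operand instead, the literal %s survives untouched.
		log.Printf("%s", cmd)
	}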
	I1213 00:09:01.909868  177307 start.go:300] post-start starting for "no-preload-143586" (driver="kvm2")
	I1213 00:09:01.909883  177307 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:09:01.909905  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:01.910311  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:09:01.910349  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.913247  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.913635  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.913669  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.913824  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.914044  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.914199  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.914349  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:09:02.005986  177307 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:09:02.011294  177307 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:09:02.011323  177307 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:09:02.011403  177307 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:09:02.011494  177307 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:09:02.011601  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:09:02.022942  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:02.044535  177307 start.go:303] post-start completed in 134.650228ms
	I1213 00:09:02.044569  177307 fix.go:56] fixHost completed within 24.639227496s
	I1213 00:09:02.044597  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:02.047115  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.047543  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:02.047573  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.047758  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:02.047986  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:02.048161  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:02.048340  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:02.048500  177307 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:02.048803  177307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1213 00:09:02.048816  177307 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1213 00:09:02.161458  177307 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702426142.108795362
	
	I1213 00:09:02.161485  177307 fix.go:206] guest clock: 1702426142.108795362
	I1213 00:09:02.161496  177307 fix.go:219] Guest: 2023-12-13 00:09:02.108795362 +0000 UTC Remote: 2023-12-13 00:09:02.044573107 +0000 UTC m=+272.815740988 (delta=64.222255ms)
	I1213 00:09:02.161522  177307 fix.go:190] guest clock delta is within tolerance: 64.222255ms
	I1213 00:09:02.161529  177307 start.go:83] releasing machines lock for "no-preload-143586", held for 24.756225075s
	I1213 00:09:02.161560  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:02.161847  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetIP
	I1213 00:09:02.164980  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.165383  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:02.165406  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.165582  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:02.166273  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:02.166493  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:02.166576  177307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:09:02.166621  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:02.166903  177307 ssh_runner.go:195] Run: cat /version.json
	I1213 00:09:02.166931  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:02.169526  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.169553  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.169895  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:02.169938  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:02.169978  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.170000  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.170183  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:02.170282  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:02.170344  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:02.170473  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:02.170480  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:02.170603  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:09:02.170653  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:02.170804  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:09:02.281372  177307 ssh_runner.go:195] Run: systemctl --version
	I1213 00:09:02.288798  177307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:09:02.432746  177307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:09:02.441453  177307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:09:02.441539  177307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:09:02.456484  177307 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:09:02.456512  177307 start.go:475] detecting cgroup driver to use...
	I1213 00:09:02.456578  177307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:09:02.473267  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:09:02.485137  177307 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:09:02.485226  177307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:09:02.497728  177307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:09:02.510592  177307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:09:02.657681  177307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:09:02.791382  177307 docker.go:219] disabling docker service ...
	I1213 00:09:02.791476  177307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:09:02.804977  177307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:09:02.817203  177307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:09:02.927181  177307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:09:03.037010  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:09:03.050235  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:09:03.068944  177307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1213 00:09:03.069048  177307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:03.078813  177307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:09:03.078975  177307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:03.089064  177307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:03.098790  177307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:03.109679  177307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:09:03.120686  177307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:09:03.128767  177307 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:09:03.128820  177307 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:09:03.141210  177307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:09:03.149602  177307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:09:03.254618  177307 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 00:09:03.434005  177307 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:09:03.434097  177307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:09:03.440391  177307 start.go:543] Will wait 60s for crictl version
	I1213 00:09:03.440481  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:03.445570  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:09:03.492155  177307 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1213 00:09:03.492240  177307 ssh_runner.go:195] Run: crio --version
	I1213 00:09:03.549854  177307 ssh_runner.go:195] Run: crio --version
	I1213 00:09:03.605472  177307 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1213 00:09:03.606678  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetIP
	I1213 00:09:03.610326  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:03.610753  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:03.610789  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:03.611019  177307 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1213 00:09:03.616608  177307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
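	The pipeline above rewrites /etc/hosts by filtering out any stale host.minikube.internal entry, appending the gateway IP, staging the result in /tmp, and sudo-cp'ing it back so the root-owned file is replaced in a single step. The same filter-then-replace pattern as a minimal local sketch (operating on an ordinary file rather than over SSH; paths are placeholders):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// rewriteHosts drops any old host.minikube.internal line and appends the new
	// one, writing to a temp file first so the target is swapped in one rename.
	func rewriteHosts(path, ip string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\thost.minikube.internal")
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, path)
	}

	func main() {
		fmt.Println(rewriteHosts("hosts", "192.168.50.1"))
	}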
	I1213 00:09:03.632258  177307 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1213 00:09:03.632317  177307 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:03.672640  177307 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1213 00:09:03.672666  177307 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 00:09:03.672723  177307 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:03.672772  177307 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:03.672774  177307 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:03.672820  177307 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:03.673002  177307 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1213 00:09:03.673032  177307 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:03.673038  177307 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:03.673094  177307 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:03.674386  177307 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:03.674433  177307 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1213 00:09:03.674505  177307 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:03.674648  177307 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:03.674774  177307 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:03.674822  177307 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:03.674864  177307 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:03.675103  177307 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:03.808980  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:03.812271  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1213 00:09:03.827742  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:03.828695  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:03.831300  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:03.846041  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:03.850598  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:03.908323  177307 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I1213 00:09:03.908378  177307 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:03.908458  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.122878  177307 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I1213 00:09:04.122930  177307 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:04.122955  177307 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1213 00:09:04.123115  177307 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:04.123132  177307 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1213 00:09:04.123164  177307 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:04.122988  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.123203  177307 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I1213 00:09:04.123230  177307 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:04.123245  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:04.123267  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.123065  177307 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I1213 00:09:04.123304  177307 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:04.123311  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.123338  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.123201  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.135289  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:04.139046  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:04.206020  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:04.206025  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I1213 00:09:04.206195  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1213 00:09:04.206291  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:04.206422  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:04.247875  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I1213 00:09:04.248003  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1213 00:09:04.248126  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1213 00:09:04.248193  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
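	With no preload tarball for v1.29.0-rc.2, the run falls back to per-image caching: podman image inspect to see whether each image is already in the CRI-O store, crictl rmi to clear a stale tag, a stat of the cached tarball under /var/lib/minikube/images, then podman load. A minimal sketch of that check-then-load sequence, assuming podman and crictl are on PATH (the image name and tarball path are taken from the log; the helper itself is illustrative, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// loadIfMissing mirrors the pattern in the log: if the image is not already
	// present in the podman/CRI-O store, remove any stale tag and load it from a
	// cached tarball.
	func loadIfMissing(image, tarball string) error {
		if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
			return nil // already present
		}
		_ = exec.Command("sudo", "crictl", "rmi", image).Run() // ignore "not found"
		out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := loadIfMissing("registry.k8s.io/kube-proxy:v1.29.0-rc.2",
			"/var/lib/minikube/images/kube-proxy_v1.29.0-rc.2"); err != nil {
			fmt.Println(err)
		}
	}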
	I1213 00:09:02.719708  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:02.719761  177122 api_server.go:103] status: https://192.168.61.249:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:02.719779  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:09:02.780571  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:02.780621  177122 api_server.go:103] status: https://192.168.61.249:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:03.281221  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:09:03.290375  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:03.290413  177122 api_server.go:103] status: https://192.168.61.249:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:03.781510  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:09:03.788285  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:03.788314  177122 api_server.go:103] status: https://192.168.61.249:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:04.280872  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:09:04.288043  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 200:
	ok
	I1213 00:09:04.299772  177122 api_server.go:141] control plane version: v1.28.4
	I1213 00:09:04.299808  177122 api_server.go:131] duration metric: took 5.445787793s to wait for apiserver health ...
	I1213 00:09:04.299821  177122 cni.go:84] Creating CNI manager for ""
	I1213 00:09:04.299830  177122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:04.301759  177122 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
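	The 403 -> 500 -> 200 progression above is the usual shape of an apiserver restart: unauthenticated probes are rejected until the RBAC bootstrap roles land, /healthz then reports 500 while individual post-start hooks are still pending, and finally 200. A minimal sketch of such a poll loop against the endpoint from the log, assuming an unauthenticated HTTPS probe and skipping certificate verification for brevity:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.61.249:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // control plane is healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for /healthz to return 200")
	}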
	I1213 00:09:02.186420  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Start
	I1213 00:09:02.186584  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Ensuring networks are active...
	I1213 00:09:02.187464  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Ensuring network default is active
	I1213 00:09:02.187836  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Ensuring network mk-default-k8s-diff-port-743278 is active
	I1213 00:09:02.188238  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Getting domain xml...
	I1213 00:09:02.188979  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Creating domain...
	I1213 00:09:03.516121  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting to get IP...
	I1213 00:09:03.517461  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:03.518001  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:03.518058  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:03.517966  178294 retry.go:31] will retry after 198.440266ms: waiting for machine to come up
	I1213 00:09:03.718554  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:03.718808  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:03.718846  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:03.718804  178294 retry.go:31] will retry after 319.889216ms: waiting for machine to come up
	I1213 00:09:04.040334  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:04.040806  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:04.040956  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:04.040869  178294 retry.go:31] will retry after 465.804275ms: waiting for machine to come up
	I1213 00:09:04.508751  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:04.509133  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:04.509237  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:04.509181  178294 retry.go:31] will retry after 609.293222ms: waiting for machine to come up
	I1213 00:09:04.303497  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:09:04.332773  177122 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:09:04.373266  177122 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:09:04.384737  177122 system_pods.go:59] 9 kube-system pods found
	I1213 00:09:04.384791  177122 system_pods.go:61] "coredns-5dd5756b68-5vm25" [83fb4b19-82a2-42eb-b4df-6fd838fb8848] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 00:09:04.384805  177122 system_pods.go:61] "coredns-5dd5756b68-6mfmr" [e9598d8f-e497-4725-8eca-7fe0e7c2c2f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 00:09:04.384820  177122 system_pods.go:61] "etcd-embed-certs-335807" [cf066481-3312-4fce-8e29-e00a0177f188] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 00:09:04.384833  177122 system_pods.go:61] "kube-apiserver-embed-certs-335807" [0a545be1-8bb8-425a-889e-5ee1293e0bdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 00:09:04.384847  177122 system_pods.go:61] "kube-controller-manager-embed-certs-335807" [fd7ec770-5008-46f9-9f41-122e56baf2e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 00:09:04.384862  177122 system_pods.go:61] "kube-proxy-k8n7r" [df8cefdc-7c91-40e6-8949-ba413fd75b28] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 00:09:04.384874  177122 system_pods.go:61] "kube-scheduler-embed-certs-335807" [d2431157-640c-49e6-a83d-37cac6be1c50] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 00:09:04.384883  177122 system_pods.go:61] "metrics-server-57f55c9bc5-fx5pd" [8aa6fc5a-5649-47b2-a7de-3cabfd1515a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:09:04.384899  177122 system_pods.go:61] "storage-provisioner" [02026bc0-4e03-4747-ad77-052f2911efe1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 00:09:04.384909  177122 system_pods.go:74] duration metric: took 11.614377ms to wait for pod list to return data ...
	I1213 00:09:04.384928  177122 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:09:04.389533  177122 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:09:04.389578  177122 node_conditions.go:123] node cpu capacity is 2
	I1213 00:09:04.389594  177122 node_conditions.go:105] duration metric: took 4.657548ms to run NodePressure ...
	I1213 00:09:04.389622  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:04.771105  177122 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1213 00:09:04.778853  177122 kubeadm.go:787] kubelet initialised
	I1213 00:09:04.778886  177122 kubeadm.go:788] duration metric: took 7.74816ms waiting for restarted kubelet to initialise ...
	I1213 00:09:04.778898  177122 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:04.795344  177122 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace to be "Ready" ...
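	Each of these waits boils down to polling the pod until its Ready condition turns True or the 4m0s budget runs out. A minimal client-go sketch of that check (the kubeconfig path, namespace, and pod name are placeholders taken from the log, not minikube's actual helper):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"coredns-5dd5756b68-5vm25", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}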
	I1213 00:09:04.323893  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1213 00:09:04.323901  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I1213 00:09:04.324122  177307 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1213 00:09:04.324168  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1213 00:09:04.324006  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I1213 00:09:04.324031  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I1213 00:09:04.324300  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1213 00:09:04.324336  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1213 00:09:04.324067  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1213 00:09:04.324096  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I1213 00:09:04.324100  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1213 00:09:04.597566  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:07.626684  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.302476413s)
	I1213 00:09:07.626718  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I1213 00:09:07.626754  177307 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1213 00:09:07.626784  177307 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0: (3.302394961s)
	I1213 00:09:07.626821  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1213 00:09:07.626824  177307 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (3.302508593s)
	I1213 00:09:07.626859  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I1213 00:09:07.626833  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1213 00:09:07.626882  177307 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.029282623s)
	I1213 00:09:07.626755  177307 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.302393062s)
	I1213 00:09:07.626939  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I1213 00:09:07.626975  177307 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1213 00:09:07.627010  177307 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:07.627072  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:05.120691  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:05.121251  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:05.121285  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:05.121183  178294 retry.go:31] will retry after 488.195845ms: waiting for machine to come up
	I1213 00:09:05.610815  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:05.611226  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:05.611258  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:05.611167  178294 retry.go:31] will retry after 705.048097ms: waiting for machine to come up
	I1213 00:09:06.317891  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:06.318329  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:06.318353  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:06.318278  178294 retry.go:31] will retry after 788.420461ms: waiting for machine to come up
	I1213 00:09:07.108179  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:07.108736  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:07.108769  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:07.108696  178294 retry.go:31] will retry after 1.331926651s: waiting for machine to come up
	I1213 00:09:08.442609  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:08.443087  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:08.443114  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:08.443032  178294 retry.go:31] will retry after 1.180541408s: waiting for machine to come up
	I1213 00:09:09.625170  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:09.625722  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:09.625753  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:09.625653  178294 retry.go:31] will retry after 1.866699827s: waiting for machine to come up
	I1213 00:09:06.828008  177122 pod_ready.go:102] pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:09.322889  177122 pod_ready.go:102] pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:09.822883  177122 pod_ready.go:92] pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:09.822913  177122 pod_ready.go:81] duration metric: took 5.027534973s waiting for pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:09.822927  177122 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6mfmr" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:09.828990  177122 pod_ready.go:92] pod "coredns-5dd5756b68-6mfmr" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:09.829018  177122 pod_ready.go:81] duration metric: took 6.083345ms waiting for pod "coredns-5dd5756b68-6mfmr" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:09.829035  177122 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:09.803403  177307 ssh_runner.go:235] Completed: which crictl: (2.176302329s)
	I1213 00:09:09.803541  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:09.803468  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.176578633s)
	I1213 00:09:09.803602  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1213 00:09:09.803634  177307 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1213 00:09:09.803673  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1213 00:09:09.851557  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1213 00:09:09.851690  177307 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1213 00:09:12.107222  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.303514888s)
	I1213 00:09:12.107284  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I1213 00:09:12.107292  177307 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.255575693s)
	I1213 00:09:12.107308  177307 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1213 00:09:12.107336  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1213 00:09:12.107363  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1213 00:09:11.494563  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:11.495148  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:11.495182  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:11.495076  178294 retry.go:31] will retry after 2.859065831s: waiting for machine to come up
	I1213 00:09:14.356328  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:14.356778  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:14.356814  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:14.356719  178294 retry.go:31] will retry after 3.506628886s: waiting for machine to come up
	I1213 00:09:11.849447  177122 pod_ready.go:102] pod "etcd-embed-certs-335807" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:14.349299  177122 pod_ready.go:102] pod "etcd-embed-certs-335807" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:14.853963  177122 pod_ready.go:92] pod "etcd-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:14.853989  177122 pod_ready.go:81] duration metric: took 5.024945989s waiting for pod "etcd-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:14.854001  177122 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:14.861663  177122 pod_ready.go:92] pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:14.861685  177122 pod_ready.go:81] duration metric: took 7.676036ms waiting for pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:14.861697  177122 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:16.223090  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.115697846s)
	I1213 00:09:16.223134  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1213 00:09:16.223165  177307 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1213 00:09:16.223211  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1213 00:09:17.473407  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.25017316s)
	I1213 00:09:17.473435  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I1213 00:09:17.473476  177307 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1213 00:09:17.473552  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1213 00:09:17.864739  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:17.865213  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:17.865237  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:17.865171  178294 retry.go:31] will retry after 2.94932643s: waiting for machine to come up
	I1213 00:09:16.884215  177122 pod_ready.go:102] pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:17.383872  177122 pod_ready.go:92] pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:17.383906  177122 pod_ready.go:81] duration metric: took 2.52219538s waiting for pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.383928  177122 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-k8n7r" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.389464  177122 pod_ready.go:92] pod "kube-proxy-k8n7r" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:17.389482  177122 pod_ready.go:81] duration metric: took 5.547172ms waiting for pod "kube-proxy-k8n7r" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.389490  177122 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.419020  177122 pod_ready.go:92] pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:17.419047  177122 pod_ready.go:81] duration metric: took 29.549704ms waiting for pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.419056  177122 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:19.730210  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:22.069281  176813 start.go:369] acquired machines lock for "old-k8s-version-508612" in 1m3.72259979s
	I1213 00:09:22.069359  176813 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:09:22.069367  176813 fix.go:54] fixHost starting: 
	I1213 00:09:22.069812  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:22.069851  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:22.088760  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45405
	I1213 00:09:22.089211  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:22.089766  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:09:22.089795  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:22.090197  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:22.090396  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:22.090574  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:09:22.092039  176813 fix.go:102] recreateIfNeeded on old-k8s-version-508612: state=Stopped err=<nil>
	I1213 00:09:22.092064  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	W1213 00:09:22.092241  176813 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:09:22.094310  176813 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-508612" ...
	I1213 00:09:20.817420  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.817794  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has current primary IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.817833  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Found IP for machine: 192.168.72.144
	I1213 00:09:20.817870  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Reserving static IP address...
	I1213 00:09:20.818250  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-743278", mac: "52:54:00:d1:a8:22", ip: "192.168.72.144"} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:20.818272  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Reserved static IP address: 192.168.72.144
	I1213 00:09:20.818286  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | skip adding static IP to network mk-default-k8s-diff-port-743278 - found existing host DHCP lease matching {name: "default-k8s-diff-port-743278", mac: "52:54:00:d1:a8:22", ip: "192.168.72.144"}
	I1213 00:09:20.818298  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Getting to WaitForSSH function...
	I1213 00:09:20.818312  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for SSH to be available...
	I1213 00:09:20.820093  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.820378  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:20.820409  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.820525  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Using SSH client type: external
	I1213 00:09:20.820552  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa (-rw-------)
	I1213 00:09:20.820587  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:09:20.820618  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | About to run SSH command:
	I1213 00:09:20.820632  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | exit 0
	I1213 00:09:20.907942  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | SSH cmd err, output: <nil>: 
	I1213 00:09:20.908280  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetConfigRaw
	I1213 00:09:20.909042  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetIP
	I1213 00:09:20.911222  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.911544  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:20.911569  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.911826  177409 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/config.json ...
	I1213 00:09:20.912048  177409 machine.go:88] provisioning docker machine ...
	I1213 00:09:20.912071  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:20.912284  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetMachineName
	I1213 00:09:20.912425  177409 buildroot.go:166] provisioning hostname "default-k8s-diff-port-743278"
	I1213 00:09:20.912460  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetMachineName
	I1213 00:09:20.912585  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:20.914727  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.915081  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:20.915113  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.915257  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:20.915449  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:20.915562  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:20.915671  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:20.915842  177409 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:20.916275  177409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1213 00:09:20.916293  177409 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-743278 && echo "default-k8s-diff-port-743278" | sudo tee /etc/hostname
	I1213 00:09:21.042561  177409 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-743278
	
	I1213 00:09:21.042606  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.045461  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.045809  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.045851  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.045957  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.046181  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.046350  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.046508  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.046685  177409 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:21.047008  177409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1213 00:09:21.047034  177409 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-743278' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-743278/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-743278' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:09:21.169124  177409 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:09:21.169155  177409 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:09:21.169175  177409 buildroot.go:174] setting up certificates
	I1213 00:09:21.169185  177409 provision.go:83] configureAuth start
	I1213 00:09:21.169194  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetMachineName
	I1213 00:09:21.169502  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetIP
	I1213 00:09:21.172929  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.173329  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.173361  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.173540  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.175847  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.176249  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.176277  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.176447  177409 provision.go:138] copyHostCerts
	I1213 00:09:21.176509  177409 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:09:21.176525  177409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:09:21.176584  177409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:09:21.176677  177409 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:09:21.176744  177409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:09:21.176775  177409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:09:21.176841  177409 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:09:21.176848  177409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:09:21.176866  177409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:09:21.176922  177409 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-743278 san=[192.168.72.144 192.168.72.144 localhost 127.0.0.1 minikube default-k8s-diff-port-743278]
	I1213 00:09:21.314924  177409 provision.go:172] copyRemoteCerts
	I1213 00:09:21.315003  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:09:21.315032  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.318149  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.318550  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.318582  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.318787  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.319005  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.319191  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.319346  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:21.409699  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:09:21.438626  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1213 00:09:21.468607  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 00:09:21.495376  177409 provision.go:86] duration metric: configureAuth took 326.171872ms
	I1213 00:09:21.495403  177409 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:09:21.495621  177409 config.go:182] Loaded profile config "default-k8s-diff-port-743278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:09:21.495700  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.498778  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.499247  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.499279  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.499495  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.499710  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.499877  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.500098  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.500285  177409 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:21.500728  177409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1213 00:09:21.500751  177409 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:09:21.822577  177409 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:09:21.822606  177409 machine.go:91] provisioned docker machine in 910.541774ms
	I1213 00:09:21.822619  177409 start.go:300] post-start starting for "default-k8s-diff-port-743278" (driver="kvm2")
	I1213 00:09:21.822632  177409 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:09:21.822659  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:21.823015  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:09:21.823044  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.825948  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.826367  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.826403  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.826577  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.826789  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.826965  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.827146  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:21.915743  177409 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:09:21.920142  177409 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:09:21.920169  177409 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:09:21.920249  177409 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:09:21.920343  177409 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:09:21.920474  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:09:21.929896  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:21.951854  177409 start.go:303] post-start completed in 129.217251ms
	I1213 00:09:21.951880  177409 fix.go:56] fixHost completed within 19.790175647s
	I1213 00:09:21.951904  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.954710  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.955103  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.955137  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.955352  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.955533  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.955685  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.955814  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.955980  177409 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:21.956485  177409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1213 00:09:21.956505  177409 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1213 00:09:22.069059  177409 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702426162.011062386
	
	I1213 00:09:22.069089  177409 fix.go:206] guest clock: 1702426162.011062386
	I1213 00:09:22.069100  177409 fix.go:219] Guest: 2023-12-13 00:09:22.011062386 +0000 UTC Remote: 2023-12-13 00:09:21.951884769 +0000 UTC m=+281.971624237 (delta=59.177617ms)
	I1213 00:09:22.069142  177409 fix.go:190] guest clock delta is within tolerance: 59.177617ms
	I1213 00:09:22.069153  177409 start.go:83] releasing machines lock for "default-k8s-diff-port-743278", held for 19.907486915s
	I1213 00:09:22.069191  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:22.069478  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetIP
	I1213 00:09:22.072371  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.072761  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:22.072794  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.072922  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:22.073441  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:22.073605  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:22.073670  177409 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:09:22.073719  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:22.073821  177409 ssh_runner.go:195] Run: cat /version.json
	I1213 00:09:22.073841  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:22.076233  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.076550  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.076703  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:22.076733  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.076874  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:22.077050  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:22.077080  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.077052  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:22.077227  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:22.077303  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:22.077540  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:22.077630  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:22.077714  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:22.077851  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:22.188131  177409 ssh_runner.go:195] Run: systemctl --version
	I1213 00:09:22.193896  177409 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:09:22.339227  177409 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:09:22.346292  177409 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:09:22.346366  177409 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:09:22.361333  177409 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:09:22.361364  177409 start.go:475] detecting cgroup driver to use...
	I1213 00:09:22.361438  177409 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:09:22.374698  177409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:09:22.387838  177409 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:09:22.387897  177409 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:09:22.402969  177409 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:09:22.417038  177409 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:09:22.533130  177409 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:09:22.665617  177409 docker.go:219] disabling docker service ...
	I1213 00:09:22.665690  177409 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:09:22.681327  177409 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:09:22.692842  177409 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:09:22.816253  177409 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:09:22.951988  177409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:09:22.967607  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:09:22.985092  177409 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1213 00:09:22.985158  177409 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:22.994350  177409 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:09:22.994403  177409 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:23.003372  177409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:23.012176  177409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:23.021215  177409 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:09:23.031105  177409 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:09:23.039486  177409 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:09:23.039552  177409 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:09:23.053085  177409 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:09:23.062148  177409 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:09:23.182275  177409 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 00:09:23.357901  177409 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:09:23.357991  177409 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:09:23.364148  177409 start.go:543] Will wait 60s for crictl version
	I1213 00:09:23.364225  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:09:23.368731  177409 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:09:23.408194  177409 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1213 00:09:23.408288  177409 ssh_runner.go:195] Run: crio --version
	I1213 00:09:23.461483  177409 ssh_runner.go:195] Run: crio --version
	I1213 00:09:23.513553  177409 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1213 00:09:20.148999  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.675412499s)
	I1213 00:09:20.149037  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I1213 00:09:20.149073  177307 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1213 00:09:20.149116  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1213 00:09:21.101559  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1213 00:09:21.101608  177307 cache_images.go:123] Successfully loaded all cached images
	I1213 00:09:21.101615  177307 cache_images.go:92] LoadImages completed in 17.428934706s
	I1213 00:09:21.101694  177307 ssh_runner.go:195] Run: crio config
	I1213 00:09:21.159955  177307 cni.go:84] Creating CNI manager for ""
	I1213 00:09:21.159978  177307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:21.159999  177307 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1213 00:09:21.160023  177307 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.181 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-143586 NodeName:no-preload-143586 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 00:09:21.160198  177307 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-143586"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.181
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.181"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 00:09:21.160303  177307 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-143586 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-143586 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1213 00:09:21.160378  177307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1213 00:09:21.170615  177307 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:09:21.170701  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:09:21.180228  177307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1213 00:09:21.198579  177307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1213 00:09:21.215096  177307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1213 00:09:21.233288  177307 ssh_runner.go:195] Run: grep 192.168.50.181	control-plane.minikube.internal$ /etc/hosts
	I1213 00:09:21.236666  177307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:21.248811  177307 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586 for IP: 192.168.50.181
	I1213 00:09:21.248847  177307 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:21.249007  177307 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:09:21.249058  177307 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:09:21.249154  177307 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/client.key
	I1213 00:09:21.249238  177307 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/apiserver.key.8f5c2e66
	I1213 00:09:21.249291  177307 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/proxy-client.key
	I1213 00:09:21.249427  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:09:21.249468  177307 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:09:21.249484  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:09:21.249523  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:09:21.249559  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:09:21.249591  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:09:21.249642  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:21.250517  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:09:21.276697  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 00:09:21.299356  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:09:21.322849  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 00:09:21.348145  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:09:21.370885  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:09:21.393257  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:09:21.418643  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:09:21.446333  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:09:21.476374  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:09:21.506662  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:09:21.530653  177307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:09:21.555129  177307 ssh_runner.go:195] Run: openssl version
	I1213 00:09:21.561174  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:09:21.571372  177307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:21.575988  177307 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:21.576053  177307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:21.581633  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 00:09:21.590564  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:09:21.599910  177307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:09:21.604113  177307 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:09:21.604160  177307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:09:21.609303  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1213 00:09:21.619194  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:09:21.628171  177307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:09:21.632419  177307 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:09:21.632494  177307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:09:21.638310  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 00:09:21.648369  177307 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:09:21.653143  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 00:09:21.659543  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 00:09:21.665393  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 00:09:21.670855  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 00:09:21.676290  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 00:09:21.681864  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 00:09:21.688162  177307 kubeadm.go:404] StartCluster: {Name:no-preload-143586 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-143586 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.181 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:09:21.688243  177307 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:09:21.688280  177307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:21.727451  177307 cri.go:89] found id: ""
	I1213 00:09:21.727536  177307 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:09:21.739044  177307 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1213 00:09:21.739066  177307 kubeadm.go:636] restartCluster start
	I1213 00:09:21.739124  177307 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 00:09:21.747328  177307 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:21.748532  177307 kubeconfig.go:92] found "no-preload-143586" server: "https://192.168.50.181:8443"
	I1213 00:09:21.751029  177307 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 00:09:21.759501  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:21.759546  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:21.771029  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:21.771048  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:21.771095  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:21.782184  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:22.282507  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:22.282588  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:22.294105  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:22.783207  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:22.783266  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:22.796776  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:23.282325  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:23.282395  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:23.295974  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:23.782516  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:23.782615  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:23.797912  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:23.514911  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetIP
	I1213 00:09:23.517973  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:23.518335  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:23.518366  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:23.518566  177409 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1213 00:09:23.523522  177409 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:23.537195  177409 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1213 00:09:23.537261  177409 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:23.579653  177409 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1213 00:09:23.579729  177409 ssh_runner.go:195] Run: which lz4
	I1213 00:09:23.583956  177409 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1213 00:09:23.588686  177409 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 00:09:23.588720  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1213 00:09:22.095647  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Start
	I1213 00:09:22.095821  176813 main.go:141] libmachine: (old-k8s-version-508612) Ensuring networks are active...
	I1213 00:09:22.096548  176813 main.go:141] libmachine: (old-k8s-version-508612) Ensuring network default is active
	I1213 00:09:22.096936  176813 main.go:141] libmachine: (old-k8s-version-508612) Ensuring network mk-old-k8s-version-508612 is active
	I1213 00:09:22.097366  176813 main.go:141] libmachine: (old-k8s-version-508612) Getting domain xml...
	I1213 00:09:22.097939  176813 main.go:141] libmachine: (old-k8s-version-508612) Creating domain...
	I1213 00:09:23.423128  176813 main.go:141] libmachine: (old-k8s-version-508612) Waiting to get IP...
	I1213 00:09:23.424090  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:23.424606  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:23.424676  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:23.424588  178471 retry.go:31] will retry after 260.416347ms: waiting for machine to come up
	I1213 00:09:23.687268  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:23.687867  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:23.687902  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:23.687814  178471 retry.go:31] will retry after 377.709663ms: waiting for machine to come up
	I1213 00:09:24.067588  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:24.068249  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:24.068277  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:24.068177  178471 retry.go:31] will retry after 480.876363ms: waiting for machine to come up
	I1213 00:09:24.550715  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:24.551244  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:24.551278  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:24.551191  178471 retry.go:31] will retry after 389.885819ms: waiting for machine to come up
	I1213 00:09:24.942898  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:24.943495  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:24.943526  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:24.943443  178471 retry.go:31] will retry after 532.578432ms: waiting for machine to come up
	I1213 00:09:25.478278  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:25.478810  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:25.478845  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:25.478781  178471 retry.go:31] will retry after 599.649827ms: waiting for machine to come up
	I1213 00:09:22.230086  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:24.729105  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:24.282598  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:24.282708  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:24.298151  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:24.782530  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:24.782639  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:24.798661  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:25.283235  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:25.283393  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:25.297662  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:25.783319  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:25.783436  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:25.797129  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:26.282666  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:26.282789  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:26.295674  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:26.783065  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:26.783147  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:26.794192  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:27.282703  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:27.282775  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:27.294823  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:27.782891  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:27.782975  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:27.798440  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:28.282826  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:28.282909  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:28.293752  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:28.782264  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:28.782325  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:28.793986  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:25.524765  177409 crio.go:444] Took 1.940853 seconds to copy over tarball
	I1213 00:09:25.524843  177409 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 00:09:28.426493  177409 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.901618536s)
	I1213 00:09:28.426522  177409 crio.go:451] Took 2.901730 seconds to extract the tarball
	I1213 00:09:28.426533  177409 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 00:09:28.467170  177409 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:28.520539  177409 crio.go:496] all images are preloaded for cri-o runtime.
	I1213 00:09:28.520567  177409 cache_images.go:84] Images are preloaded, skipping loading
	I1213 00:09:28.520654  177409 ssh_runner.go:195] Run: crio config
	I1213 00:09:28.588320  177409 cni.go:84] Creating CNI manager for ""
	I1213 00:09:28.588348  177409 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:28.588370  177409 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1213 00:09:28.588395  177409 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-743278 NodeName:default-k8s-diff-port-743278 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 00:09:28.588593  177409 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-743278"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 00:09:28.588687  177409 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-743278 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-743278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1213 00:09:28.588755  177409 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1213 00:09:28.597912  177409 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:09:28.597987  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:09:28.608324  177409 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1213 00:09:28.627102  177409 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 00:09:28.646837  177409 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1213 00:09:28.664534  177409 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I1213 00:09:28.668580  177409 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:28.680736  177409 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278 for IP: 192.168.72.144
	I1213 00:09:28.680777  177409 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:28.680971  177409 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:09:28.681037  177409 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:09:28.681140  177409 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/client.key
	I1213 00:09:28.681234  177409 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/apiserver.key.1dd7f3f2
	I1213 00:09:28.681301  177409 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/proxy-client.key
	I1213 00:09:28.681480  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:09:28.681525  177409 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:09:28.681543  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:09:28.681587  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:09:28.681636  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:09:28.681681  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:09:28.681743  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:28.682710  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:09:28.707852  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 00:09:28.732792  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:09:28.755545  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 00:09:28.779880  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:09:28.805502  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:09:28.829894  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:09:28.853211  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:09:28.877291  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:09:28.899870  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:09:28.922141  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:09:28.945634  177409 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:09:28.962737  177409 ssh_runner.go:195] Run: openssl version
	I1213 00:09:28.968869  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:09:28.980535  177409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:09:28.985219  177409 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:09:28.985284  177409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:09:28.990911  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 00:09:29.001595  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:09:29.012408  177409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:29.017644  177409 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:29.017760  177409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:29.023914  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 00:09:29.034793  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:09:29.045825  177409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:09:29.050538  177409 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:09:29.050584  177409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:09:29.057322  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1213 00:09:29.067993  177409 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:09:29.072782  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 00:09:29.078806  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 00:09:29.084744  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 00:09:29.090539  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 00:09:29.096734  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 00:09:29.102729  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 00:09:29.108909  177409 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-743278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-743278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:09:29.109022  177409 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:09:29.109095  177409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:29.158003  177409 cri.go:89] found id: ""
	I1213 00:09:29.158100  177409 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:09:29.169464  177409 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1213 00:09:29.169500  177409 kubeadm.go:636] restartCluster start
	I1213 00:09:29.169555  177409 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 00:09:29.180347  177409 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:29.181609  177409 kubeconfig.go:92] found "default-k8s-diff-port-743278" server: "https://192.168.72.144:8444"
	I1213 00:09:29.184377  177409 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 00:09:29.193593  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.193645  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.205447  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:29.205465  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.205519  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.221169  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:29.721729  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.721835  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.735942  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:26.080407  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:26.081034  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:26.081061  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:26.080973  178471 retry.go:31] will retry after 1.103545811s: waiting for machine to come up
	I1213 00:09:27.186673  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:27.187208  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:27.187241  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:27.187152  178471 retry.go:31] will retry after 977.151221ms: waiting for machine to come up
	I1213 00:09:28.165799  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:28.166219  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:28.166257  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:28.166166  178471 retry.go:31] will retry after 1.27451971s: waiting for machine to come up
	I1213 00:09:29.441683  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:29.442203  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:29.442240  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:29.442122  178471 retry.go:31] will retry after 1.620883976s: waiting for machine to come up
	I1213 00:09:26.733297  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:29.624623  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:29.282975  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.621544  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.632749  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:29.783112  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.783214  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.794919  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:30.282457  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:30.282528  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:30.293852  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:30.782400  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:30.782499  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:30.797736  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.282276  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:31.282367  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:31.298115  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.759957  177307 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1213 00:09:31.760001  177307 kubeadm.go:1135] stopping kube-system containers ...
	I1213 00:09:31.760013  177307 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 00:09:31.760078  177307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:31.799045  177307 cri.go:89] found id: ""
	I1213 00:09:31.799146  177307 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 00:09:31.813876  177307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:09:31.823305  177307 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:09:31.823382  177307 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:09:31.831741  177307 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1213 00:09:31.831767  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:31.961871  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:32.826330  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:33.045107  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:33.119065  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:33.187783  177307 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:09:33.187887  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:33.217142  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:33.735695  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:34.236063  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:30.221906  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:30.230723  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:30.243849  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:30.721380  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:30.721492  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:30.734401  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.222026  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:31.222150  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:31.235400  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.722107  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:31.722189  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:31.735415  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:32.222216  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:32.222365  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:32.238718  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:32.721270  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:32.721389  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:32.735677  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:33.222261  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:33.222329  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:33.243918  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:33.721349  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:33.721438  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:33.738138  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:34.221645  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:34.221748  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:34.238845  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:34.721320  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:34.721390  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:34.738271  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.065065  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:31.065494  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:31.065528  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:31.065436  178471 retry.go:31] will retry after 2.452686957s: waiting for machine to come up
	I1213 00:09:33.519937  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:33.520505  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:33.520537  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:33.520468  178471 retry.go:31] will retry after 2.830999713s: waiting for machine to come up
	I1213 00:09:31.729101  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:34.229143  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:34.735218  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:35.235570  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:35.736120  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:35.764916  177307 api_server.go:72] duration metric: took 2.577131698s to wait for apiserver process to appear ...
	I1213 00:09:35.764942  177307 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:09:35.764971  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:35.765820  177307 api_server.go:269] stopped: https://192.168.50.181:8443/healthz: Get "https://192.168.50.181:8443/healthz": dial tcp 192.168.50.181:8443: connect: connection refused
	I1213 00:09:35.765860  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:35.766257  177307 api_server.go:269] stopped: https://192.168.50.181:8443/healthz: Get "https://192.168.50.181:8443/healthz": dial tcp 192.168.50.181:8443: connect: connection refused
	I1213 00:09:36.266842  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:35.221935  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:35.222069  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:35.240609  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:35.721801  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:35.721965  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:35.765295  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:36.221944  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:36.222021  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:36.238211  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:36.721750  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:36.721830  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:36.736765  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:37.221936  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:37.222185  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:37.238002  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:37.721304  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:37.721385  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:37.734166  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:38.221603  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:38.221701  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:38.235174  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:38.721704  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:38.721794  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:38.735963  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:39.193664  177409 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1213 00:09:39.193713  177409 kubeadm.go:1135] stopping kube-system containers ...
	I1213 00:09:39.193727  177409 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 00:09:39.193787  177409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:39.238262  177409 cri.go:89] found id: ""
	I1213 00:09:39.238336  177409 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 00:09:39.258625  177409 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:09:39.271127  177409 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:09:39.271196  177409 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:09:39.280870  177409 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1213 00:09:39.280906  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:39.399746  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:36.353967  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:36.354453  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:36.354481  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:36.354415  178471 retry.go:31] will retry after 2.983154328s: waiting for machine to come up
	I1213 00:09:39.341034  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:39.341497  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:39.341526  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:39.341462  178471 retry.go:31] will retry after 3.436025657s: waiting for machine to come up
	I1213 00:09:36.230811  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:38.729730  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:40.732654  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:39.693843  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:39.693877  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:39.693896  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:39.767118  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:39.767153  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:39.767169  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:39.787684  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:39.787725  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:40.267069  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:40.272416  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:40.272464  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:40.766651  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:40.799906  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:40.799942  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:41.266411  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:41.271259  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 200:
	ok
	I1213 00:09:41.278691  177307 api_server.go:141] control plane version: v1.29.0-rc.2
	I1213 00:09:41.278715  177307 api_server.go:131] duration metric: took 5.51376527s to wait for apiserver health ...
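	For illustration only, a minimal Go sketch of the kind of healthz polling shown above (the endpoint and the ~500ms retry cadence are taken from the log; the TLS-skip-verify probe is an assumption for this sketch, and minikube's own check in api_server.go differs in detail):

	    // healthz_wait.go - minimal sketch of polling an apiserver /healthz until it returns 200.
	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"net/http"
	    	"time"
	    )

	    func waitForHealthz(url string, timeout time.Duration) error {
	    	client := &http.Client{
	    		Timeout: 2 * time.Second,
	    		// Skip certificate verification for the probe only (assumption for this sketch).
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    	}
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		resp, err := client.Get(url)
	    		if err == nil {
	    			resp.Body.Close()
	    			if resp.StatusCode == http.StatusOK {
	    				return nil // healthz answered 200 "ok", as at 00:09:41 above
	    			}
	    		}
	    		time.Sleep(500 * time.Millisecond) // retry interval, matching the cadence in the log
	    	}
	    	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	    }

	    func main() {
	    	if err := waitForHealthz("https://192.168.50.181:8443/healthz", 4*time.Minute); err != nil {
	    		fmt.Println(err)
	    	}
	    }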
	I1213 00:09:41.278725  177307 cni.go:84] Creating CNI manager for ""
	I1213 00:09:41.278732  177307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:41.280473  177307 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:09:41.281924  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:09:41.308598  177307 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:09:41.330367  177307 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:09:41.342017  177307 system_pods.go:59] 8 kube-system pods found
	I1213 00:09:41.342048  177307 system_pods.go:61] "coredns-76f75df574-87nc6" [829c7a44-85a0-4ed0-b98a-b5016aa04b97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 00:09:41.342054  177307 system_pods.go:61] "etcd-no-preload-143586" [b50e57af-530a-4689-bf42-a9f74fa6bea1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 00:09:41.342065  177307 system_pods.go:61] "kube-apiserver-no-preload-143586" [3aed4b84-e029-433a-8394-f99608b52edd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 00:09:41.342071  177307 system_pods.go:61] "kube-controller-manager-no-preload-143586" [f88e182a-0a81-4c7b-b2b3-d6911baf340f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 00:09:41.342080  177307 system_pods.go:61] "kube-proxy-8k9x6" [a71d2257-2012-4d0d-948d-d69c0c04bd2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 00:09:41.342086  177307 system_pods.go:61] "kube-scheduler-no-preload-143586" [dfb7b176-fbf8-4542-890f-1eba0f699b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 00:09:41.342098  177307 system_pods.go:61] "metrics-server-57f55c9bc5-px5lm" [25b8b500-0ad0-4da3-8f7f-d8c46a848e8a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:09:41.342106  177307 system_pods.go:61] "storage-provisioner" [bb18a95a-ed99-43f7-bc6f-333e0b86cacc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 00:09:41.342114  177307 system_pods.go:74] duration metric: took 11.726461ms to wait for pod list to return data ...
	I1213 00:09:41.342132  177307 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:09:41.345985  177307 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:09:41.346011  177307 node_conditions.go:123] node cpu capacity is 2
	I1213 00:09:41.346021  177307 node_conditions.go:105] duration metric: took 3.884209ms to run NodePressure ...
	I1213 00:09:41.346038  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:41.682789  177307 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1213 00:09:41.690867  177307 kubeadm.go:787] kubelet initialised
	I1213 00:09:41.690892  177307 kubeadm.go:788] duration metric: took 8.076203ms waiting for restarted kubelet to initialise ...
	I1213 00:09:41.690902  177307 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:41.698622  177307 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-87nc6" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:43.720619  177307 pod_ready.go:102] pod "coredns-76f75df574-87nc6" in "kube-system" namespace has status "Ready":"False"
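	As a rough sketch of the pod_ready.go waits in this log, the snippet below checks a pod's Ready condition with client-go; the kubeconfig path is an assumption, the pod name is copied from the log, and this is not the test harness's actual code:

	    // pod_ready_sketch.go - poll one pod until its Ready condition is True.
	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func podReady(pod *corev1.Pod) bool {
	    	for _, c := range pod.Status.Conditions {
	    		if c.Type == corev1.PodReady {
	    			return c.Status == corev1.ConditionTrue
	    		}
	    	}
	    	return false
	    }

	    func main() {
	    	// Kubeconfig location is an assumption for this sketch.
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	    	if err != nil {
	    		panic(err)
	    	}
	    	client := kubernetes.NewForConfigOrDie(cfg)

	    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	    	defer cancel()
	    	for {
	    		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-76f75df574-87nc6", metav1.GetOptions{})
	    		if err == nil && podReady(pod) {
	    			fmt.Println("pod is Ready")
	    			return
	    		}
	    		select {
	    		case <-ctx.Done():
	    			fmt.Println("timed out waiting for pod to be Ready")
	    			return
	    		case <-time.After(2 * time.Second): // poll interval is an assumption
	    		}
	    	}
	    }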
	I1213 00:09:40.471390  177409 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.071602244s)
	I1213 00:09:40.471425  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:40.665738  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:40.786290  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:40.859198  177409 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:09:40.859302  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:40.887488  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:41.406398  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:41.906653  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:42.405784  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:42.906462  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:43.406489  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:43.432933  177409 api_server.go:72] duration metric: took 2.573735322s to wait for apiserver process to appear ...
	I1213 00:09:43.432975  177409 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:09:43.432997  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:43.433588  177409 api_server.go:269] stopped: https://192.168.72.144:8444/healthz: Get "https://192.168.72.144:8444/healthz": dial tcp 192.168.72.144:8444: connect: connection refused
	I1213 00:09:43.433641  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:43.434089  177409 api_server.go:269] stopped: https://192.168.72.144:8444/healthz: Get "https://192.168.72.144:8444/healthz": dial tcp 192.168.72.144:8444: connect: connection refused
	I1213 00:09:43.934469  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:42.779498  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.779971  176813 main.go:141] libmachine: (old-k8s-version-508612) Found IP for machine: 192.168.39.70
	I1213 00:09:42.779993  176813 main.go:141] libmachine: (old-k8s-version-508612) Reserving static IP address...
	I1213 00:09:42.780011  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has current primary IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.780466  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "old-k8s-version-508612", mac: "52:54:00:dd:da:91", ip: "192.168.39.70"} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:42.780504  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | skip adding static IP to network mk-old-k8s-version-508612 - found existing host DHCP lease matching {name: "old-k8s-version-508612", mac: "52:54:00:dd:da:91", ip: "192.168.39.70"}
	I1213 00:09:42.780524  176813 main.go:141] libmachine: (old-k8s-version-508612) Reserved static IP address: 192.168.39.70
	I1213 00:09:42.780547  176813 main.go:141] libmachine: (old-k8s-version-508612) Waiting for SSH to be available...
	I1213 00:09:42.780559  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Getting to WaitForSSH function...
	I1213 00:09:42.783019  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.783434  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:42.783482  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.783566  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Using SSH client type: external
	I1213 00:09:42.783598  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa (-rw-------)
	I1213 00:09:42.783638  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:09:42.783661  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | About to run SSH command:
	I1213 00:09:42.783681  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | exit 0
	I1213 00:09:42.885148  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | SSH cmd err, output: <nil>: 
	I1213 00:09:42.885690  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetConfigRaw
	I1213 00:09:42.886388  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetIP
	I1213 00:09:42.889440  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.889898  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:42.889937  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.890209  176813 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/config.json ...
	I1213 00:09:42.890423  176813 machine.go:88] provisioning docker machine ...
	I1213 00:09:42.890444  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:42.890685  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetMachineName
	I1213 00:09:42.890874  176813 buildroot.go:166] provisioning hostname "old-k8s-version-508612"
	I1213 00:09:42.890899  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetMachineName
	I1213 00:09:42.891039  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:42.893678  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.894021  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:42.894051  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.894174  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:42.894391  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:42.894556  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:42.894720  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:42.894909  176813 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:42.895383  176813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1213 00:09:42.895406  176813 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-508612 && echo "old-k8s-version-508612" | sudo tee /etc/hostname
	I1213 00:09:43.045290  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-508612
	
	I1213 00:09:43.045345  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.048936  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.049438  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.049476  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.049662  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:43.049877  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.050074  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.050231  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:43.050413  176813 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:43.050888  176813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1213 00:09:43.050919  176813 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-508612' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-508612/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-508612' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:09:43.183021  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:09:43.183061  176813 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:09:43.183089  176813 buildroot.go:174] setting up certificates
	I1213 00:09:43.183102  176813 provision.go:83] configureAuth start
	I1213 00:09:43.183115  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetMachineName
	I1213 00:09:43.183467  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetIP
	I1213 00:09:43.186936  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.187409  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.187441  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.187620  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.190125  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.190560  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.190612  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.190775  176813 provision.go:138] copyHostCerts
	I1213 00:09:43.190842  176813 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:09:43.190861  176813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:09:43.190936  176813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:09:43.191113  176813 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:09:43.191126  176813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:09:43.191158  176813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:09:43.191245  176813 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:09:43.191256  176813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:09:43.191284  176813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:09:43.191354  176813 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-508612 san=[192.168.39.70 192.168.39.70 localhost 127.0.0.1 minikube old-k8s-version-508612]
	I1213 00:09:43.321927  176813 provision.go:172] copyRemoteCerts
	I1213 00:09:43.321999  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:09:43.322039  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.325261  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.325653  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.325686  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.325920  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:43.326128  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.326300  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:43.326474  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:09:43.420656  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:09:43.445997  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1213 00:09:43.471466  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 00:09:43.500104  176813 provision.go:86] duration metric: configureAuth took 316.983913ms
	I1213 00:09:43.500137  176813 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:09:43.500380  176813 config.go:182] Loaded profile config "old-k8s-version-508612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1213 00:09:43.500554  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.503567  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.503994  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.504034  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.504320  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:43.504551  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.504797  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.504978  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:43.505164  176813 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:43.505640  176813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1213 00:09:43.505668  176813 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:09:43.859639  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:09:43.859723  176813 machine.go:91] provisioned docker machine in 969.28446ms
	I1213 00:09:43.859741  176813 start.go:300] post-start starting for "old-k8s-version-508612" (driver="kvm2")
	I1213 00:09:43.859754  176813 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:09:43.859781  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:43.860174  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:09:43.860207  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.863407  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.863903  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.863944  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.864142  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:43.864340  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.864604  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:43.864907  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:09:43.957616  176813 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:09:43.963381  176813 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:09:43.963413  176813 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:09:43.963489  176813 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:09:43.963594  176813 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:09:43.963710  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:09:43.972902  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:44.001469  176813 start.go:303] post-start completed in 141.706486ms
	I1213 00:09:44.001503  176813 fix.go:56] fixHost completed within 21.932134773s
	I1213 00:09:44.001532  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:44.004923  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.005334  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:44.005410  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.005545  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:44.005846  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:44.006067  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:44.006198  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:44.006401  176813 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:44.006815  176813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1213 00:09:44.006841  176813 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1213 00:09:44.134363  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702426184.079167065
	
	I1213 00:09:44.134389  176813 fix.go:206] guest clock: 1702426184.079167065
	I1213 00:09:44.134398  176813 fix.go:219] Guest: 2023-12-13 00:09:44.079167065 +0000 UTC Remote: 2023-12-13 00:09:44.001508908 +0000 UTC m=+368.244893563 (delta=77.658157ms)
	I1213 00:09:44.134434  176813 fix.go:190] guest clock delta is within tolerance: 77.658157ms
	I1213 00:09:44.134446  176813 start.go:83] releasing machines lock for "old-k8s-version-508612", held for 22.06510734s
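	The guest-clock check above runs "date +%s.%N" on the machine and compares it with the host clock; a minimal Go sketch of that comparison follows (the sample value is copied from the log, and the one-second tolerance is an assumption, not minikube's actual threshold in fix.go):

	    // clock_delta_sketch.go - parse a guest "date +%s.%N" reading and compare it to the host clock.
	    package main

	    import (
	    	"fmt"
	    	"strconv"
	    	"strings"
	    	"time"
	    )

	    func parseEpoch(out string) (time.Time, error) {
	    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	    	sec, err := strconv.ParseInt(parts[0], 10, 64)
	    	if err != nil {
	    		return time.Time{}, err
	    	}
	    	var nsec int64
	    	if len(parts) == 2 {
	    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
	    			return time.Time{}, err
	    		}
	    	}
	    	return time.Unix(sec, nsec), nil
	    }

	    func main() {
	    	guest, err := parseEpoch("1702426184.079167065") // value taken from the log above
	    	if err != nil {
	    		panic(err)
	    	}
	    	delta := guest.Sub(time.Now())
	    	if delta < 0 {
	    		delta = -delta
	    	}
	    	const tolerance = time.Second // assumed tolerance for this sketch
	    	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
	    }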
	I1213 00:09:44.134469  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:44.134760  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetIP
	I1213 00:09:44.137820  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.138245  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:44.138275  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.138444  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:44.138957  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:44.139152  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:44.139229  176813 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:09:44.139300  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:44.139358  176813 ssh_runner.go:195] Run: cat /version.json
	I1213 00:09:44.139383  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:44.142396  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.142920  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:44.142981  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.143041  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.143197  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:44.143340  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:44.143473  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:44.143487  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:44.143505  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.143633  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:44.143628  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:09:44.143786  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:44.143913  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:44.144041  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:09:44.235010  176813 ssh_runner.go:195] Run: systemctl --version
	I1213 00:09:44.263174  176813 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:09:44.424330  176813 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:09:44.433495  176813 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:09:44.433573  176813 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:09:44.454080  176813 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:09:44.454106  176813 start.go:475] detecting cgroup driver to use...
	I1213 00:09:44.454173  176813 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:09:44.482370  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:09:44.499334  176813 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:09:44.499429  176813 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:09:44.516413  176813 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:09:44.529636  176813 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:09:44.638215  176813 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:09:44.774229  176813 docker.go:219] disabling docker service ...
	I1213 00:09:44.774304  176813 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:09:44.790414  176813 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:09:44.804909  176813 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:09:44.938205  176813 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:09:45.069429  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:09:45.085783  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:09:45.105487  176813 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1213 00:09:45.105558  176813 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:45.117662  176813 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:09:45.117789  176813 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:45.129560  176813 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:45.139168  176813 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:45.148466  176813 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:09:45.157626  176813 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:09:45.166608  176813 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:09:45.166675  176813 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:09:45.179666  176813 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:09:45.190356  176813 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:09:45.366019  176813 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 00:09:45.549130  176813 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:09:45.549209  176813 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:09:45.554753  176813 start.go:543] Will wait 60s for crictl version
	I1213 00:09:45.554809  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:45.559452  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:09:45.605106  176813 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
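	After "systemctl restart crio" the log waits up to 60s for /var/run/crio/crio.sock; a minimal sketch of such a wait is below (the 250ms poll interval is an assumption, and this simply stats the path rather than dialing the socket as a CRI client would):

	    // socket_wait_sketch.go - wait for a container-runtime socket path to appear.
	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"time"
	    )

	    func waitForSocket(path string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		if _, err := os.Stat(path); err == nil {
	    			return nil // socket file exists; crictl and other CRI clients can connect
	    		}
	    		time.Sleep(250 * time.Millisecond) // poll interval is an assumption
	    	}
	    	return fmt.Errorf("%s did not appear within %s", path, timeout)
	    }

	    func main() {
	    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
	    		fmt.Println(err)
	    	}
	    }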
	I1213 00:09:45.605180  176813 ssh_runner.go:195] Run: crio --version
	I1213 00:09:45.654428  176813 ssh_runner.go:195] Run: crio --version
	I1213 00:09:45.711107  176813 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1213 00:09:45.712598  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetIP
	I1213 00:09:45.716022  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:45.716371  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:45.716405  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:45.716751  176813 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 00:09:45.722339  176813 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:45.739528  176813 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1213 00:09:45.739594  176813 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:45.786963  176813 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1213 00:09:45.787044  176813 ssh_runner.go:195] Run: which lz4
	I1213 00:09:45.791462  176813 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 00:09:45.795923  176813 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 00:09:45.795952  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1213 00:09:43.228658  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:45.231385  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:45.721999  177307 pod_ready.go:92] pod "coredns-76f75df574-87nc6" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:45.722026  177307 pod_ready.go:81] duration metric: took 4.023377357s waiting for pod "coredns-76f75df574-87nc6" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:45.722038  177307 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:47.744891  177307 pod_ready.go:102] pod "etcd-no-preload-143586" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:48.255190  177307 pod_ready.go:92] pod "etcd-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:48.255220  177307 pod_ready.go:81] duration metric: took 2.533174326s waiting for pod "etcd-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:48.255233  177307 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:48.263450  177307 pod_ready.go:92] pod "kube-apiserver-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:48.263477  177307 pod_ready.go:81] duration metric: took 8.236475ms waiting for pod "kube-apiserver-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:48.263489  177307 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:48.212975  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:48.213009  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:48.213033  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:48.303921  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:48.303963  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:48.435167  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:48.442421  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:48.442455  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:48.934740  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:48.941126  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:48.941161  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:49.434967  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:49.444960  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:49.445016  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:49.935234  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:49.941400  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I1213 00:09:49.951057  177409 api_server.go:141] control plane version: v1.28.4
	I1213 00:09:49.951094  177409 api_server.go:131] duration metric: took 6.518109828s to wait for apiserver health ...
	I1213 00:09:49.951105  177409 cni.go:84] Creating CNI manager for ""
	I1213 00:09:49.951115  177409 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:49.953198  177409 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:09:49.954914  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:09:49.989291  177409 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:09:47.527308  176813 crio.go:444] Took 1.735860 seconds to copy over tarball
	I1213 00:09:47.527390  176813 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 00:09:50.641162  176813 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.113740813s)
	I1213 00:09:50.641195  176813 crio.go:451] Took 3.113856 seconds to extract the tarball
	I1213 00:09:50.641208  176813 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 00:09:50.683194  176813 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:50.729476  176813 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1213 00:09:50.729503  176813 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 00:09:50.729574  176813 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:50.729602  176813 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1213 00:09:50.729611  176813 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1213 00:09:50.729617  176813 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:50.729653  176813 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:50.729605  176813 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:50.729572  176813 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:50.729589  176813 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:50.730849  176813 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:50.730908  176813 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:50.730924  176813 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1213 00:09:50.730968  176813 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:50.730986  176813 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:50.730997  176813 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:50.730847  176813 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1213 00:09:50.731163  176813 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:47.235674  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:49.728030  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:50.051886  177409 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:09:50.069774  177409 system_pods.go:59] 8 kube-system pods found
	I1213 00:09:50.069817  177409 system_pods.go:61] "coredns-5dd5756b68-ftv9l" [60d9730b-2e6b-4263-a70d-273cf6837f60] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 00:09:50.069834  177409 system_pods.go:61] "etcd-default-k8s-diff-port-743278" [4c78aadc-8213-4cd3-a365-e8d71d00b1ed] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 00:09:50.069849  177409 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-743278" [2590235b-fc24-41ed-8879-909cbba26d5c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 00:09:50.069862  177409 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-743278" [58396a31-0a0d-4a3f-b658-e27f836affd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 00:09:50.069875  177409 system_pods.go:61] "kube-proxy-zk4wl" [e20fe8f7-0c1f-4be3-8184-cd3d6cc19a43] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 00:09:50.069887  177409 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-743278" [e4312815-a719-4ab5-b628-9958dd7ce658] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 00:09:50.069907  177409 system_pods.go:61] "metrics-server-57f55c9bc5-6q9jg" [b1849258-4fd1-43a5-b67b-02d8e44acd8b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:09:50.069919  177409 system_pods.go:61] "storage-provisioner" [d87ee16e-300f-4797-b0be-efc256d0e827] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 00:09:50.069932  177409 system_pods.go:74] duration metric: took 18.020213ms to wait for pod list to return data ...
	I1213 00:09:50.069944  177409 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:09:50.073659  177409 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:09:50.073688  177409 node_conditions.go:123] node cpu capacity is 2
	I1213 00:09:50.073701  177409 node_conditions.go:105] duration metric: took 3.752016ms to run NodePressure ...
	I1213 00:09:50.073722  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:50.545413  177409 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1213 00:09:50.559389  177409 kubeadm.go:787] kubelet initialised
	I1213 00:09:50.559421  177409 kubeadm.go:788] duration metric: took 13.971205ms waiting for restarted kubelet to initialise ...
	I1213 00:09:50.559442  177409 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:50.568069  177409 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.580294  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.580327  177409 pod_ready.go:81] duration metric: took 12.225698ms waiting for pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.580340  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.580348  177409 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.588859  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.588893  177409 pod_ready.go:81] duration metric: took 8.526992ms waiting for pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.588909  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.588917  177409 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.609726  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.609759  177409 pod_ready.go:81] duration metric: took 20.834011ms waiting for pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.609773  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.609781  177409 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.626724  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.626757  177409 pod_ready.go:81] duration metric: took 16.966751ms waiting for pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.626770  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.626777  177409 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zk4wl" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.950893  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "kube-proxy-zk4wl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.950927  177409 pod_ready.go:81] duration metric: took 324.143266ms waiting for pod "kube-proxy-zk4wl" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.950939  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "kube-proxy-zk4wl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.950948  177409 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:51.465200  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:51.465227  177409 pod_ready.go:81] duration metric: took 514.267219ms waiting for pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:51.465242  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:51.465251  177409 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:52.111655  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:52.111690  177409 pod_ready.go:81] duration metric: took 646.423162ms waiting for pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:52.111707  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:52.111716  177409 pod_ready.go:38] duration metric: took 1.552263211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:52.111735  177409 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:09:52.125125  177409 ops.go:34] apiserver oom_adj: -16
	I1213 00:09:52.125152  177409 kubeadm.go:640] restartCluster took 22.955643397s
	I1213 00:09:52.125175  177409 kubeadm.go:406] StartCluster complete in 23.016262726s
	I1213 00:09:52.125204  177409 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:52.125379  177409 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:09:52.128126  177409 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:52.226763  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:09:52.226947  177409 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:09:52.227030  177409 config.go:182] Loaded profile config "default-k8s-diff-port-743278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:09:52.227060  177409 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-743278"
	I1213 00:09:52.227071  177409 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-743278"
	I1213 00:09:52.227082  177409 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-743278"
	I1213 00:09:52.227088  177409 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-743278"
	W1213 00:09:52.227092  177409 addons.go:240] addon storage-provisioner should already be in state true
	I1213 00:09:52.227115  177409 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-743278"
	I1213 00:09:52.227154  177409 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-743278"
	I1213 00:09:52.227165  177409 host.go:66] Checking if "default-k8s-diff-port-743278" exists ...
	W1213 00:09:52.227173  177409 addons.go:240] addon metrics-server should already be in state true
	I1213 00:09:52.227252  177409 host.go:66] Checking if "default-k8s-diff-port-743278" exists ...
	I1213 00:09:52.227633  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.227667  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.227633  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.227698  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.227728  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.227794  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.500974  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I1213 00:09:52.501503  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.502103  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.502130  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.502518  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.503096  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.503120  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40999
	I1213 00:09:52.503173  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.503249  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35141
	I1213 00:09:52.503460  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.503653  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.503952  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.503979  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.504117  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.504137  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.504326  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.504485  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.504680  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:52.504910  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.504957  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.508425  177409 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-743278"
	W1213 00:09:52.508466  177409 addons.go:240] addon default-storageclass should already be in state true
	I1213 00:09:52.508495  177409 host.go:66] Checking if "default-k8s-diff-port-743278" exists ...
	I1213 00:09:52.508941  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.508989  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.520593  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35027
	I1213 00:09:52.521055  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37697
	I1213 00:09:52.521104  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.521443  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.521602  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.521630  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.521891  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.521917  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.521956  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.522162  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:52.522300  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.522506  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:52.523942  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:52.524208  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40367
	I1213 00:09:52.524419  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:52.612780  177409 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 00:09:52.524612  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.755661  177409 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 00:09:52.941509  177409 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:52.941551  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 00:09:53.149407  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:52.881597  177409 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-743278" context rescaled to 1 replicas
	I1213 00:09:53.149472  177409 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:09:53.149496  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:09:52.884700  177409 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1213 00:09:52.756216  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:53.149523  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:53.149532  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:53.149484  177409 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:09:53.150147  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:53.153109  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.153288  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.360880  177409 out.go:177] * Verifying Kubernetes components...
	I1213 00:09:53.153717  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:53.153952  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:53.361036  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:50.301405  177307 pod_ready.go:102] pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:52.803001  177307 pod_ready.go:102] pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:53.361074  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:53.466451  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.361087  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.361322  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:53.466546  177409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:09:53.361364  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:53.361590  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:53.466661  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:53.466906  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:53.466963  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:53.467166  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:53.467266  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:53.489618  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41785
	I1213 00:09:53.490349  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:53.490932  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:53.490951  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:53.491365  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:53.491579  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:53.494223  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:53.495774  177409 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:09:53.495796  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:09:53.495816  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:53.499620  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.500099  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:53.500124  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.500405  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:53.500592  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:53.500734  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:53.501069  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:53.667878  177409 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-743278" to be "Ready" ...
	I1213 00:09:53.806167  177409 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 00:09:53.806194  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 00:09:53.807837  177409 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:09:53.808402  177409 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:09:53.915171  177409 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 00:09:53.915199  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 00:09:53.993146  177409 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:09:53.993172  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 00:09:54.071008  177409 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
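	The apply above installs the metrics-server apiservice, deployment, RBAC and service manifests in a single invocation; a hedged way to double-check the resulting objects, reusing the binary and kubeconfig paths from that command, would be:
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.28.4/kubectl -n kube-system get deploy,svc metrics-server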
	I1213 00:09:50.865405  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:50.866538  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:50.867587  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1213 00:09:50.871289  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:50.872472  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1213 00:09:50.878541  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:50.882665  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:50.978405  176813 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1213 00:09:50.978458  176813 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:50.978527  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.038778  176813 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1213 00:09:51.038824  176813 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:51.038877  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.048868  176813 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1213 00:09:51.048925  176813 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1213 00:09:51.048983  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.054956  176813 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1213 00:09:51.055003  176813 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:51.055045  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.055045  176813 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1213 00:09:51.055133  176813 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1213 00:09:51.055162  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.069915  176813 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1213 00:09:51.069971  176813 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:51.070018  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.073904  176813 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1213 00:09:51.073955  176813 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:51.073990  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:51.074058  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:51.073997  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.074127  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1213 00:09:51.074173  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:51.074270  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1213 00:09:51.076866  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:51.216889  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:51.217032  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1213 00:09:51.217046  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1213 00:09:51.217118  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1213 00:09:51.217157  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1213 00:09:51.217213  176813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1213 00:09:51.217804  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1213 00:09:51.217887  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1213 00:09:51.224310  176813 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1213 00:09:51.224329  176813 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1213 00:09:51.224373  176813 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1213 00:09:51.270398  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1213 00:09:51.651719  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:53.599238  176813 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.374835203s)
	I1213 00:09:53.599269  176813 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1213 00:09:53.599323  176813 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.947557973s)
	I1213 00:09:53.599398  176813 cache_images.go:92] LoadImages completed in 2.869881827s
	W1213 00:09:53.599497  176813 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I1213 00:09:53.599587  176813 ssh_runner.go:195] Run: crio config
	I1213 00:09:53.669735  176813 cni.go:84] Creating CNI manager for ""
	I1213 00:09:53.669767  176813 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:53.669792  176813 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1213 00:09:53.669814  176813 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.70 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-508612 NodeName:old-k8s-version-508612 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1213 00:09:53.669991  176813 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-508612"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-508612
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.70:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 00:09:53.670076  176813 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-508612 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-508612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
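	For orientation: the kubeadm YAML and the kubelet systemd drop-in above are generated by substituting the node's profile values (name, IP, CRI socket, Kubernetes version) into templates. The snippet below is a toy text/template sketch of that idea, not minikube's actual template; the struct and field names are illustrative only.
	
	package main
	
	import (
		"os"
		"text/template"
	)
	
	type node struct {
		Name   string
		NodeIP string
	}
	
	// A fragment shaped like the nodeRegistration block rendered above.
	const snippet = `nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "{{.Name}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	`
	
	func main() {
		t := template.Must(template.New("kubeadm").Parse(snippet))
		// Values taken from the run above; any other profile substitutes its own.
		if err := t.Execute(os.Stdout, node{Name: "old-k8s-version-508612", NodeIP: "192.168.39.70"}); err != nil {
			panic(err)
		}
	}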
	I1213 00:09:53.670138  176813 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1213 00:09:53.680033  176813 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:09:53.680120  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:09:53.689595  176813 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1213 00:09:53.707167  176813 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 00:09:53.726978  176813 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1213 00:09:53.746191  176813 ssh_runner.go:195] Run: grep 192.168.39.70	control-plane.minikube.internal$ /etc/hosts
	I1213 00:09:53.750290  176813 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:53.763369  176813 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612 for IP: 192.168.39.70
	I1213 00:09:53.763407  176813 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:53.763598  176813 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:09:53.763671  176813 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:09:53.763776  176813 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/client.key
	I1213 00:09:53.763855  176813 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/apiserver.key.5467de6f
	I1213 00:09:53.763914  176813 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/proxy-client.key
	I1213 00:09:53.764055  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:09:53.764098  176813 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:09:53.764115  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:09:53.764158  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:09:53.764195  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:09:53.764238  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:09:53.764297  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:53.765315  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:09:53.793100  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 00:09:53.821187  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:09:53.847791  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 00:09:53.873741  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:09:53.903484  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:09:53.930420  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:09:53.958706  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:09:53.986236  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:09:54.011105  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:09:54.034546  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:09:54.070680  176813 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:09:54.093063  176813 ssh_runner.go:195] Run: openssl version
	I1213 00:09:54.100686  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:09:54.114647  176813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:54.121380  176813 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:54.121463  176813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:54.128895  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 00:09:54.142335  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:09:54.155146  176813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:09:54.159746  176813 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:09:54.159817  176813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:09:54.166153  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1213 00:09:54.176190  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:09:54.187049  176813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:09:54.191667  176813 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:09:54.191737  176813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:09:54.197335  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 00:09:54.208790  176813 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:09:54.213230  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 00:09:54.219377  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 00:09:54.225539  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 00:09:54.232970  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 00:09:54.240720  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 00:09:54.247054  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
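	The `openssl x509 -checkend 86400` runs above ask one question per certificate: will it still be valid 24 hours from now? A minimal Go sketch of the same check follows; the path is an example taken from the log, and this is an illustration rather than minikube's own certificate code.
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// validFor reports whether the PEM certificate at path stays valid for at least d.
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}
	
	func main() {
		ok, err := validFor("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}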
	I1213 00:09:54.253486  176813 kubeadm.go:404] StartCluster: {Name:old-k8s-version-508612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-508612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:09:54.253600  176813 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:09:54.253674  176813 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:54.303024  176813 cri.go:89] found id: ""
	I1213 00:09:54.303102  176813 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:09:54.317795  176813 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1213 00:09:54.317827  176813 kubeadm.go:636] restartCluster start
	I1213 00:09:54.317884  176813 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 00:09:54.331180  176813 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:54.332572  176813 kubeconfig.go:92] found "old-k8s-version-508612" server: "https://192.168.39.70:8443"
	I1213 00:09:54.335079  176813 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 00:09:54.346247  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:54.346292  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:54.362692  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:54.362720  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:54.362776  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:54.377570  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:54.878307  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:54.878384  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:54.891159  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:55.377679  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:55.377789  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:55.392860  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
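	The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are a poll loop: keep asking the node for the kube-apiserver PID until one appears or a deadline passes. A minimal local sketch of that loop is below; it runs pgrep directly instead of over minikube's ssh_runner, and the half-second pause and one-minute budget are illustrative, not the values api_server.go uses.
	
	package main
	
	import (
		"context"
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// apiserverPID asks pgrep for the newest process whose full command line
	// matches the apiserver pattern (-x exact, -n newest, -f full command line).
	func apiserverPID(ctx context.Context) (string, error) {
		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
		defer cancel()
		for {
			if pid, err := apiserverPID(ctx); err == nil {
				fmt.Println("apiserver pid:", pid)
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("gave up waiting for apiserver:", ctx.Err())
				return
			case <-time.After(500 * time.Millisecond):
			}
		}
	}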
	I1213 00:09:52.229764  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:54.232636  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:55.162034  177409 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.354143542s)
	I1213 00:09:55.162091  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.162103  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.162486  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.162503  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.162519  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.162536  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.162554  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.162887  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.162916  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.162961  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.255531  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.255561  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.255844  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.255867  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.686976  177409 node_ready.go:58] node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:55.814831  177409 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.006392676s)
	I1213 00:09:55.814885  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.814905  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.815237  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.815300  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.815315  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.815325  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.815675  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.815693  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.815721  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.959447  177409 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.88836869s)
	I1213 00:09:55.959502  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.959519  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.959909  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.959931  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.959941  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.959943  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.959950  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.960189  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.960205  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.960223  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.960226  177409 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-743278"
	I1213 00:09:55.962464  177409 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1213 00:09:54.302018  177307 pod_ready.go:92] pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:54.302047  177307 pod_ready.go:81] duration metric: took 6.038549186s waiting for pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.302061  177307 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8k9x6" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.308192  177307 pod_ready.go:92] pod "kube-proxy-8k9x6" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:54.308220  177307 pod_ready.go:81] duration metric: took 6.150452ms waiting for pod "kube-proxy-8k9x6" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.308233  177307 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.829614  177307 pod_ready.go:92] pod "kube-scheduler-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:54.829639  177307 pod_ready.go:81] duration metric: took 521.398817ms waiting for pod "kube-scheduler-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.829649  177307 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:56.842731  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:55.964691  177409 addons.go:502] enable addons completed in 3.737755135s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1213 00:09:58.183398  177409 node_ready.go:58] node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:58.683603  177409 node_ready.go:49] node "default-k8s-diff-port-743278" has status "Ready":"True"
	I1213 00:09:58.683629  177409 node_ready.go:38] duration metric: took 5.01572337s waiting for node "default-k8s-diff-port-743278" to be "Ready" ...
	I1213 00:09:58.683638  177409 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:58.692636  177409 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:58.699084  177409 pod_ready.go:92] pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:58.699103  177409 pod_ready.go:81] duration metric: took 6.437856ms waiting for pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:58.699111  177409 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
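	Each of the pod_ready.go lines above is one iteration of a readiness wait: fetch the pod and check whether its Ready condition is True, retrying until the stated timeout. The client-go sketch below shows that shape under the assumption of a standard kubeconfig; it reuses the etcd pod name from this run purely as an example and is not minikube's pod_ready.go implementation.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// isReady reports whether the pod's Ready condition is True.
	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Poll every 2s for up to 6m, mirroring the "waiting up to 6m0s" lines above.
		err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-default-k8s-diff-port-743278", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient errors: keep polling
			}
			return isReady(pod), nil
		})
		fmt.Println("ready wait finished:", err)
	}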
	I1213 00:09:55.877904  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:55.877977  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:55.893729  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:56.377737  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:56.377803  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:56.389754  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:56.878464  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:56.878530  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:56.891849  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:57.377841  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:57.377929  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:57.389962  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:57.878384  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:57.878464  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:57.892518  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:58.378033  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:58.378119  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:58.391780  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:58.878309  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:58.878397  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:58.890677  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:59.378117  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:59.378239  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:59.390695  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:59.878240  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:59.878318  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:59.889688  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:00.378278  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:00.378376  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:00.390756  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:56.727591  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:58.729633  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:59.343431  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:01.344195  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:03.842943  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:00.718294  177409 pod_ready.go:102] pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:01.216472  177409 pod_ready.go:92] pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.216499  177409 pod_ready.go:81] duration metric: took 2.517381433s waiting for pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.216513  177409 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.221993  177409 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.222016  177409 pod_ready.go:81] duration metric: took 5.495703ms waiting for pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.222026  177409 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.227513  177409 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.227543  177409 pod_ready.go:81] duration metric: took 5.506889ms waiting for pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.227555  177409 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zk4wl" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.485096  177409 pod_ready.go:92] pod "kube-proxy-zk4wl" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.485120  177409 pod_ready.go:81] duration metric: took 257.55839ms waiting for pod "kube-proxy-zk4wl" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.485131  177409 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.886812  177409 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.886843  177409 pod_ready.go:81] duration metric: took 401.704296ms waiting for pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.886860  177409 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:04.192658  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:00.878385  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:00.878514  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:00.891279  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:01.378010  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:01.378120  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:01.389897  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:01.878496  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:01.878581  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:01.890674  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:02.377657  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:02.377767  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:02.389165  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:02.877744  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:02.877886  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:02.889536  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:03.378083  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:03.378206  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:03.390009  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:03.878637  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:03.878757  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:03.891565  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:04.347244  176813 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1213 00:10:04.347324  176813 kubeadm.go:1135] stopping kube-system containers ...
	I1213 00:10:04.347339  176813 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 00:10:04.347431  176813 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:10:04.391480  176813 cri.go:89] found id: ""
	I1213 00:10:04.391558  176813 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 00:10:04.407659  176813 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:10:04.416545  176813 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:10:04.416616  176813 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:10:04.425366  176813 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1213 00:10:04.425393  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:04.553907  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:05.643662  176813 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.089700044s)
	I1213 00:10:05.643704  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:01.230857  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:03.728598  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:05.729292  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:05.843723  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:07.844549  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:06.193695  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:08.194425  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:05.881077  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:05.983444  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:06.106543  176813 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:10:06.106637  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:06.120910  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:06.637294  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:07.137087  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:07.636989  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:07.659899  176813 api_server.go:72] duration metric: took 1.5533541s to wait for apiserver process to appear ...
	I1213 00:10:07.659925  176813 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:10:07.659949  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:08.229410  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:10.729881  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:10.344919  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:12.842700  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:10.692378  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:12.693810  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:12.660316  176813 api_server.go:269] stopped: https://192.168.39.70:8443/healthz: Get "https://192.168.39.70:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 00:10:12.660365  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:13.933418  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:10:13.933452  176813 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:10:14.434114  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:14.442223  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1213 00:10:14.442261  176813 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1213 00:10:14.934425  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:14.941188  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1213 00:10:14.941232  176813 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1213 00:10:15.433614  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:15.441583  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I1213 00:10:15.449631  176813 api_server.go:141] control plane version: v1.16.0
	I1213 00:10:15.449656  176813 api_server.go:131] duration metric: took 7.789725712s to wait for apiserver health ...
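	The healthz wait above is a loop over GET https://<node>:8443/healthz: 403 while anonymous access is rejected, 500 while post-start hooks are still failing, then 200 "ok". The sketch below is a rough anonymous probe of that endpoint; a real client such as minikube authenticates with the cluster's client certificate, so this illustration would keep seeing the 403 shown in the log.
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver serves a self-signed cert for this profile.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for {
			resp, err := client.Get("https://192.168.39.70:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("status %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}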
	I1213 00:10:15.449671  176813 cni.go:84] Creating CNI manager for ""
	I1213 00:10:15.449677  176813 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:10:15.451328  176813 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:10:15.452690  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:10:15.463558  176813 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:10:15.482997  176813 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:10:15.493646  176813 system_pods.go:59] 7 kube-system pods found
	I1213 00:10:15.493674  176813 system_pods.go:61] "coredns-5644d7b6d9-jnhmk" [38a0c948-a47e-4566-ad47-376943787ca1] Running
	I1213 00:10:15.493679  176813 system_pods.go:61] "etcd-old-k8s-version-508612" [80e685b2-cd70-4b7d-b00c-feda3ab1a509] Running
	I1213 00:10:15.493683  176813 system_pods.go:61] "kube-apiserver-old-k8s-version-508612" [657f1d7b-4fcb-44d4-96d3-3cc659fb9f0a] Running
	I1213 00:10:15.493688  176813 system_pods.go:61] "kube-controller-manager-old-k8s-version-508612" [d84a0927-7d19-4bba-8afd-b32877a9aee3] Running
	I1213 00:10:15.493692  176813 system_pods.go:61] "kube-proxy-fpd4j" [f2e9e528-576f-4339-b208-09ee5dbe7fcb] Running
	I1213 00:10:15.493696  176813 system_pods.go:61] "kube-scheduler-old-k8s-version-508612" [ce5ff03a-23bf-4cce-8795-58e412fc841c] Running
	I1213 00:10:15.493699  176813 system_pods.go:61] "storage-provisioner" [98a03a45-0cd3-40b4-9e66-6df14db5a848] Running
	I1213 00:10:15.493706  176813 system_pods.go:74] duration metric: took 10.683423ms to wait for pod list to return data ...
	I1213 00:10:15.493715  176813 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:10:15.498679  176813 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:10:15.498726  176813 node_conditions.go:123] node cpu capacity is 2
	I1213 00:10:15.498742  176813 node_conditions.go:105] duration metric: took 5.021318ms to run NodePressure ...
	I1213 00:10:15.498767  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:15.762302  176813 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1213 00:10:15.766665  176813 retry.go:31] will retry after 288.591747ms: kubelet not initialised
	I1213 00:10:13.228878  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:15.728396  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:15.343194  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:17.344384  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:15.193995  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:17.693024  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:19.693723  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:16.063637  176813 retry.go:31] will retry after 250.40677ms: kubelet not initialised
	I1213 00:10:16.320362  176813 retry.go:31] will retry after 283.670967ms: kubelet not initialised
	I1213 00:10:16.610834  176813 retry.go:31] will retry after 810.845397ms: kubelet not initialised
	I1213 00:10:17.427101  176813 retry.go:31] will retry after 1.00058932s: kubelet not initialised
	I1213 00:10:18.498625  176813 retry.go:31] will retry after 2.616819597s: kubelet not initialised
	I1213 00:10:18.226990  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:20.228211  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:19.345330  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:21.843959  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:22.192449  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:24.193001  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:21.120283  176813 retry.go:31] will retry after 1.883694522s: kubelet not initialised
	I1213 00:10:23.009312  176813 retry.go:31] will retry after 2.899361823s: kubelet not initialised
	I1213 00:10:22.727450  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:24.729952  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:24.342673  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:26.343639  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:28.842489  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:26.696279  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:29.194453  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:25.914801  176813 retry.go:31] will retry after 8.466541404s: kubelet not initialised
	I1213 00:10:27.227947  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:29.229430  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:30.843429  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:32.844457  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:31.692122  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:33.694437  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:34.391931  176813 retry.go:31] will retry after 6.686889894s: kubelet not initialised
	I1213 00:10:31.729052  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:33.730399  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:35.344029  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:37.842694  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:36.193427  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:38.193688  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:36.226978  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:38.227307  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:40.227797  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:40.343702  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:42.841574  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:40.693443  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:42.693668  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:41.084957  176813 retry.go:31] will retry after 18.68453817s: kubelet not initialised
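	The retry.go lines above show the wait between "kubelet not initialised" checks growing roughly exponentially with some jitter (sub-second at first, nearly 19s by this point). The loop below is a hedged sketch of that cadence, not minikube's retry package; the cap, jitter, and four-minute budget are assumptions, and kubeletInitialised is a stand-in for the real pod-listing check.
	
	package main
	
	import (
		"fmt"
		"math/rand"
		"time"
	)
	
	// kubeletInitialised stands in for the real check (listing kube-system pods
	// and their owners); always false here so the loop's timing is visible.
	func kubeletInitialised() bool { return false }
	
	func main() {
		deadline := time.Now().Add(4 * time.Minute)
		wait := 250 * time.Millisecond
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			if kubeletInitialised() {
				fmt.Println("kubelet initialised after", attempt, "attempts")
				return
			}
			// Jittered exponential growth, capped so one sleep never dominates.
			sleep := wait + time.Duration(rand.Int63n(int64(wait)))
			if sleep > 30*time.Second {
				sleep = 30 * time.Second
			}
			fmt.Println("will retry after", sleep)
			time.Sleep(sleep)
			wait *= 2
		}
		fmt.Println("gave up waiting for kubelet")
	}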
	I1213 00:10:42.229433  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:44.728322  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:44.843586  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:46.844269  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:45.192582  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:47.691806  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:49.692545  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:47.227469  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:49.228908  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:49.343743  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:51.843948  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:51.694308  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:54.192629  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:51.728175  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:54.226904  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:54.342077  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:56.343115  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:58.345031  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:56.193137  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:58.693873  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:59.777116  176813 kubeadm.go:787] kubelet initialised
	I1213 00:10:59.777150  176813 kubeadm.go:788] duration metric: took 44.014819539s waiting for restarted kubelet to initialise ...
	I1213 00:10:59.777162  176813 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:10:59.782802  176813 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-jnhmk" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.788307  176813 pod_ready.go:92] pod "coredns-5644d7b6d9-jnhmk" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:59.788348  176813 pod_ready.go:81] duration metric: took 5.514049ms waiting for pod "coredns-5644d7b6d9-jnhmk" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.788356  176813 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-xsbd5" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.792569  176813 pod_ready.go:92] pod "coredns-5644d7b6d9-xsbd5" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:59.792588  176813 pod_ready.go:81] duration metric: took 4.224934ms waiting for pod "coredns-5644d7b6d9-xsbd5" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.792599  176813 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.797096  176813 pod_ready.go:92] pod "etcd-old-k8s-version-508612" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:59.797119  176813 pod_ready.go:81] duration metric: took 4.508662ms waiting for pod "etcd-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.797130  176813 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.801790  176813 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-508612" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:59.801811  176813 pod_ready.go:81] duration metric: took 4.673597ms waiting for pod "kube-apiserver-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.801818  176813 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.175474  176813 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-508612" in "kube-system" namespace has status "Ready":"True"
	I1213 00:11:00.175504  176813 pod_ready.go:81] duration metric: took 373.677737ms waiting for pod "kube-controller-manager-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.175523  176813 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fpd4j" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.576344  176813 pod_ready.go:92] pod "kube-proxy-fpd4j" in "kube-system" namespace has status "Ready":"True"
	I1213 00:11:00.576373  176813 pod_ready.go:81] duration metric: took 400.842191ms waiting for pod "kube-proxy-fpd4j" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.576387  176813 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:56.229570  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:58.728770  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:00.843201  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:03.343182  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:01.199677  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:03.201427  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:00.976886  176813 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-508612" in "kube-system" namespace has status "Ready":"True"
	I1213 00:11:00.976908  176813 pod_ready.go:81] duration metric: took 400.512629ms waiting for pod "kube-scheduler-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.976920  176813 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:03.283224  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:05.284030  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:01.229393  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:03.728570  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:05.843264  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:08.343228  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:05.694505  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:08.197100  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:07.285680  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:09.786591  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:06.227705  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:08.229577  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:10.727791  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:10.343300  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:12.843162  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:10.695161  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:13.195051  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:12.285865  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:14.785354  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:12.728656  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:15.227890  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:14.844312  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:16.847144  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:15.692597  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:18.193383  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:17.284986  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:19.786139  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:17.229608  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:19.728503  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:19.344056  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:21.843070  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:23.844051  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:20.692417  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:22.692912  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:24.693204  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:22.285292  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:24.784342  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:22.227286  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:24.228831  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:26.342758  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:28.347392  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:26.693376  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:28.696971  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:27.284643  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:29.284776  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:26.727796  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:29.227690  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:30.843482  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:32.844695  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:31.191962  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:33.192585  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:31.285494  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:33.285863  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:35.791234  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:31.727767  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:33.728047  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:35.342092  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:37.342356  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:35.196354  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:37.693679  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:38.285349  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:40.785094  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:36.228379  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:38.728361  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:40.728752  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:39.342944  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:41.343229  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:43.842669  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:40.192636  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:42.696348  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:43.284960  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:45.783972  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:42.730357  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:45.228371  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:45.844034  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:48.345622  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:45.199304  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:47.692399  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:49.692916  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:47.784062  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:49.784533  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:47.232607  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:49.727709  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:50.842207  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:52.845393  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:52.193829  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:54.694220  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:51.784671  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:54.284709  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:51.728053  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:53.729081  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:55.342783  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:57.343274  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:56.694508  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:59.194904  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:56.285342  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:58.783460  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:56.227395  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:58.231694  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:00.727822  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:59.343618  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:01.842326  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:03.842653  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:01.197290  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:03.694223  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:01.285393  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:03.784968  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:05.786110  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:02.728596  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:05.227456  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:05.843038  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:08.342838  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:05.695124  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:08.192630  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:08.284460  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:10.284768  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:07.728787  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:10.227036  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:10.344532  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:12.841921  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:10.193483  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:12.196550  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:14.693706  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:12.784036  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:14.784471  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:12.227952  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:14.228178  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:14.842965  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:17.343683  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:17.193131  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:19.692561  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:16.785596  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:19.285058  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:16.726702  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:18.728269  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:19.843031  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:22.343417  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:22.191869  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:24.193973  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:21.783890  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:23.784341  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:25.784521  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:21.227269  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:23.227691  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:25.228239  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:24.343805  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:26.346354  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:28.844254  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:26.693293  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:29.193583  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:27.784904  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:30.285014  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:27.727045  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:29.728691  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:31.346007  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:33.843421  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:31.194160  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:33.691639  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:32.784701  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:35.284958  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:32.226511  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:34.228892  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:36.342384  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:38.343546  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:35.694257  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:38.191620  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:37.286143  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:39.783802  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:36.727306  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:38.728168  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:40.850557  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:43.342393  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:40.192328  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:42.192749  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:44.693406  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:41.784411  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:43.789293  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:41.228591  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:43.728133  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:45.842401  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:47.843839  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:47.193847  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:49.692840  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:46.284387  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:48.284692  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:50.285419  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:46.228594  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:48.728575  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:50.343073  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:52.843034  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:51.692895  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:54.196344  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:52.785093  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:54.785238  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:51.226704  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:53.228359  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:55.228418  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:54.847060  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:57.345339  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:56.693854  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:59.191098  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:57.285101  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:59.783955  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:57.727063  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:59.727437  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:59.847179  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:02.343433  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:01.192388  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:03.693056  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:01.784055  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:03.784840  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:01.727635  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:03.727705  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:04.346684  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:06.843294  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:06.192928  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:08.693240  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:06.284092  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:08.784303  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:10.784971  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:06.228019  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:08.727726  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:09.342622  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:11.343211  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:13.843894  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:10.698298  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:13.191387  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:13.285854  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:15.790625  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:11.228300  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:13.730143  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:16.343574  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:18.343896  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:15.195797  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:17.694620  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:18.283712  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:20.284937  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:16.227280  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:17.419163  177122 pod_ready.go:81] duration metric: took 4m0.000090271s waiting for pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace to be "Ready" ...
	E1213 00:13:17.419207  177122 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1213 00:13:17.419233  177122 pod_ready.go:38] duration metric: took 4m12.64031929s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:13:17.419260  177122 kubeadm.go:640] restartCluster took 4m32.91279931s
	W1213 00:13:17.419346  177122 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1213 00:13:17.419387  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 00:13:20.847802  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:23.342501  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:20.193039  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:22.693730  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:22.285212  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:24.783901  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:25.343029  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:27.842840  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:25.194640  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:27.692515  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:29.695543  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:26.785503  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:29.284618  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:33.603614  177122 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (16.184189808s)
	I1213 00:13:33.603692  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:13:33.617573  177122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:13:33.626779  177122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:13:33.636160  177122 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:13:33.636214  177122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 00:13:33.694141  177122 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1213 00:13:33.694267  177122 kubeadm.go:322] [preflight] Running pre-flight checks
	I1213 00:13:33.853582  177122 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 00:13:33.853718  177122 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 00:13:33.853992  177122 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 00:13:34.092007  177122 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 00:13:29.844324  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:32.345926  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:34.093975  177122 out.go:204]   - Generating certificates and keys ...
	I1213 00:13:34.094125  177122 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1213 00:13:34.094198  177122 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1213 00:13:34.094297  177122 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 00:13:34.094492  177122 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1213 00:13:34.095287  177122 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 00:13:34.096041  177122 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1213 00:13:34.096841  177122 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1213 00:13:34.097551  177122 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1213 00:13:34.098399  177122 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 00:13:34.099122  177122 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 00:13:34.099844  177122 kubeadm.go:322] [certs] Using the existing "sa" key
	I1213 00:13:34.099929  177122 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 00:13:34.191305  177122 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 00:13:34.425778  177122 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 00:13:34.601958  177122 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 00:13:34.747536  177122 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 00:13:34.748230  177122 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 00:13:34.750840  177122 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 00:13:32.193239  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:34.691928  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:31.286291  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:33.786852  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:34.752409  177122 out.go:204]   - Booting up control plane ...
	I1213 00:13:34.752562  177122 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 00:13:34.752659  177122 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 00:13:34.752994  177122 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 00:13:34.772157  177122 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 00:13:34.774789  177122 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 00:13:34.774854  177122 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1213 00:13:34.926546  177122 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 00:13:34.346782  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:36.847723  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:36.694243  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:39.195903  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:36.284979  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:38.285685  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:40.286174  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:39.345989  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:41.353093  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:43.847024  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:43.435528  177122 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.506764 seconds
	I1213 00:13:43.435691  177122 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 00:13:43.454840  177122 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 00:13:43.997250  177122 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 00:13:43.997537  177122 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-335807 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 00:13:44.513097  177122 kubeadm.go:322] [bootstrap-token] Using token: a9yhsz.n5p4z1j5jkbj68ov
	I1213 00:13:44.514695  177122 out.go:204]   - Configuring RBAC rules ...
	I1213 00:13:44.514836  177122 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 00:13:44.520134  177122 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 00:13:44.528726  177122 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 00:13:44.535029  177122 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 00:13:44.539162  177122 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 00:13:44.545990  177122 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 00:13:44.561964  177122 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 00:13:44.831402  177122 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1213 00:13:44.927500  177122 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1213 00:13:44.931294  177122 kubeadm.go:322] 
	I1213 00:13:44.931371  177122 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1213 00:13:44.931389  177122 kubeadm.go:322] 
	I1213 00:13:44.931500  177122 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1213 00:13:44.931509  177122 kubeadm.go:322] 
	I1213 00:13:44.931535  177122 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1213 00:13:44.931605  177122 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 00:13:44.931674  177122 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 00:13:44.931681  177122 kubeadm.go:322] 
	I1213 00:13:44.931743  177122 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1213 00:13:44.931752  177122 kubeadm.go:322] 
	I1213 00:13:44.931838  177122 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 00:13:44.931861  177122 kubeadm.go:322] 
	I1213 00:13:44.931938  177122 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1213 00:13:44.932026  177122 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 00:13:44.932139  177122 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 00:13:44.932151  177122 kubeadm.go:322] 
	I1213 00:13:44.932260  177122 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 00:13:44.932367  177122 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1213 00:13:44.932386  177122 kubeadm.go:322] 
	I1213 00:13:44.932533  177122 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token a9yhsz.n5p4z1j5jkbj68ov \
	I1213 00:13:44.932702  177122 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d \
	I1213 00:13:44.932726  177122 kubeadm.go:322] 	--control-plane 
	I1213 00:13:44.932730  177122 kubeadm.go:322] 
	I1213 00:13:44.932797  177122 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1213 00:13:44.932808  177122 kubeadm.go:322] 
	I1213 00:13:44.932927  177122 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token a9yhsz.n5p4z1j5jkbj68ov \
	I1213 00:13:44.933074  177122 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1213 00:13:44.933953  177122 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 00:13:44.934004  177122 cni.go:84] Creating CNI manager for ""
	I1213 00:13:44.934026  177122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:13:44.935893  177122 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:13:41.694337  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:44.192303  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:42.783865  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:44.784599  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:44.937355  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:13:44.961248  177122 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:13:45.005684  177122 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:13:45.005758  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:45.005789  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=embed-certs-335807 minikube.k8s.io/updated_at=2023_12_13T00_13_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:45.117205  177122 ops.go:34] apiserver oom_adj: -16
	I1213 00:13:45.402961  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:45.532503  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:46.343927  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:48.843509  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:46.197988  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:48.691611  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:46.785080  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:49.283316  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:46.138647  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:46.639104  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:47.139139  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:47.638244  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:48.138634  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:48.638352  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:49.138616  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:49.639061  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:50.138633  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:50.639013  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:51.343525  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:53.345044  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:50.693254  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:52.693448  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:51.286352  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:53.782966  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:55.786792  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:51.138430  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:51.638340  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:52.138696  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:52.638727  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:53.138509  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:53.639092  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:54.138153  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:54.638781  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:55.138875  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:55.639166  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:56.138534  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:56.638726  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:57.138427  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:57.273101  177122 kubeadm.go:1088] duration metric: took 12.26741009s to wait for elevateKubeSystemPrivileges.
	I1213 00:13:57.273139  177122 kubeadm.go:406] StartCluster complete in 5m12.825293837s
	I1213 00:13:57.273163  177122 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:13:57.273294  177122 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:13:57.275845  177122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:13:57.276142  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:13:57.276488  177122 config.go:182] Loaded profile config "embed-certs-335807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:13:57.276665  177122 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:13:57.276739  177122 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-335807"
	I1213 00:13:57.276756  177122 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-335807"
	W1213 00:13:57.276765  177122 addons.go:240] addon storage-provisioner should already be in state true
	I1213 00:13:57.276812  177122 host.go:66] Checking if "embed-certs-335807" exists ...
	I1213 00:13:57.277245  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.277283  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.277356  177122 addons.go:69] Setting default-storageclass=true in profile "embed-certs-335807"
	I1213 00:13:57.277374  177122 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-335807"
	I1213 00:13:57.277528  177122 addons.go:69] Setting metrics-server=true in profile "embed-certs-335807"
	I1213 00:13:57.277545  177122 addons.go:231] Setting addon metrics-server=true in "embed-certs-335807"
	W1213 00:13:57.277552  177122 addons.go:240] addon metrics-server should already be in state true
	I1213 00:13:57.277599  177122 host.go:66] Checking if "embed-certs-335807" exists ...
	I1213 00:13:57.277791  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.277820  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.277923  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.277945  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.296571  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I1213 00:13:57.299879  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I1213 00:13:57.299897  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39203
	I1213 00:13:57.300251  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.300833  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.300906  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.300923  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.300935  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.301294  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.301309  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.301330  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.301419  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.301427  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.301497  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:13:57.301728  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.301774  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.302199  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.302232  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.303181  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.303222  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.304586  177122 addons.go:231] Setting addon default-storageclass=true in "embed-certs-335807"
	W1213 00:13:57.304601  177122 addons.go:240] addon default-storageclass should already be in state true
	I1213 00:13:57.304620  177122 host.go:66] Checking if "embed-certs-335807" exists ...
	I1213 00:13:57.304860  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.304891  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.323403  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I1213 00:13:57.324103  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.324810  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40115
	I1213 00:13:57.324961  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.324985  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.325197  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.325332  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.325518  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:13:57.325910  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.325935  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.326524  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.326731  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:13:57.328013  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:13:57.329895  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:13:57.332188  177122 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 00:13:57.333332  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I1213 00:13:57.333375  177122 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 00:13:57.334952  177122 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:13:57.333392  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 00:13:57.333795  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.337096  177122 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:13:57.337110  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:13:57.337124  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:13:57.337162  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:13:57.337564  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.337585  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.339793  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.340514  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.340572  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.340821  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.341606  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:13:57.341657  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.341829  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:13:57.342023  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:13:57.342206  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:13:57.342411  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:13:57.347105  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.347512  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:13:57.347538  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.347782  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:13:57.347974  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:13:57.348108  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:13:57.348213  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:13:57.359690  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I1213 00:13:57.360385  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.361065  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.361093  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.361567  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.361777  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:13:57.363693  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:13:57.364020  177122 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:13:57.364037  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:13:57.364056  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:13:57.367409  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.367874  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:13:57.367904  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.368086  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:13:57.368287  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:13:57.368470  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:13:57.368619  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:13:57.399353  177122 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-335807" context rescaled to 1 replicas
	I1213 00:13:57.399391  177122 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.249 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:13:57.401371  177122 out.go:177] * Verifying Kubernetes components...
	I1213 00:13:54.829811  177307 pod_ready.go:81] duration metric: took 4m0.000140793s waiting for pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace to be "Ready" ...
	E1213 00:13:54.829844  177307 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1213 00:13:54.829878  177307 pod_ready.go:38] duration metric: took 4m13.138964255s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:13:54.829912  177307 kubeadm.go:640] restartCluster took 4m33.090839538s
	W1213 00:13:54.829977  177307 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1213 00:13:54.830014  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 00:13:55.192745  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:57.193249  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:59.196279  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:57.403699  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:13:57.551632  177122 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 00:13:57.551656  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 00:13:57.590132  177122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:13:57.617477  177122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:13:57.648290  177122 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 00:13:57.648324  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 00:13:57.724394  177122 node_ready.go:35] waiting up to 6m0s for node "embed-certs-335807" to be "Ready" ...
	I1213 00:13:57.724498  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 00:13:57.751666  177122 node_ready.go:49] node "embed-certs-335807" has status "Ready":"True"
	I1213 00:13:57.751704  177122 node_ready.go:38] duration metric: took 27.274531ms waiting for node "embed-certs-335807" to be "Ready" ...
	I1213 00:13:57.751718  177122 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:13:57.764283  177122 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gs4kb" in "kube-system" namespace to be "Ready" ...
	I1213 00:13:57.835941  177122 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:13:57.835968  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 00:13:58.040994  177122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:13:59.867561  177122 pod_ready.go:102] pod "coredns-5dd5756b68-gs4kb" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:00.210713  177122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.620538044s)
	I1213 00:14:00.210745  177122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.593229432s)
	I1213 00:14:00.210763  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.210775  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.210805  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.210846  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.210892  177122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.169863052s)
	I1213 00:14:00.210932  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.210951  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.210803  177122 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.48627637s)
	I1213 00:14:00.211241  177122 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1213 00:14:00.211428  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.211467  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.211477  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.211486  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.211496  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.211804  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.211843  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.211851  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.211860  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.211869  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.211979  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.212025  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.212033  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.212251  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.213205  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.213214  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.213221  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.213253  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.213269  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.213287  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.213300  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.213565  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.213592  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.213600  177122 addons.go:467] Verifying addon metrics-server=true in "embed-certs-335807"
	I1213 00:14:00.213633  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.231892  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.231921  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.232238  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.232257  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.234089  177122 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1213 00:13:58.285584  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:00.286469  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:00.235676  177122 addons.go:502] enable addons completed in 2.959016059s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1213 00:14:01.848071  177122 pod_ready.go:92] pod "coredns-5dd5756b68-gs4kb" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.848093  177122 pod_ready.go:81] duration metric: took 4.083780035s waiting for pod "coredns-5dd5756b68-gs4kb" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.848101  177122 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t92hd" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.854062  177122 pod_ready.go:92] pod "coredns-5dd5756b68-t92hd" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.854082  177122 pod_ready.go:81] duration metric: took 5.975194ms waiting for pod "coredns-5dd5756b68-t92hd" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.854090  177122 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.864033  177122 pod_ready.go:92] pod "etcd-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.864060  177122 pod_ready.go:81] duration metric: took 9.963384ms waiting for pod "etcd-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.864072  177122 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.875960  177122 pod_ready.go:92] pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.875990  177122 pod_ready.go:81] duration metric: took 11.909604ms waiting for pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.876004  177122 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.882084  177122 pod_ready.go:92] pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.882107  177122 pod_ready.go:81] duration metric: took 6.092978ms waiting for pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.882118  177122 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ccq47" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:02.645363  177122 pod_ready.go:92] pod "kube-proxy-ccq47" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:02.645389  177122 pod_ready.go:81] duration metric: took 763.264171ms waiting for pod "kube-proxy-ccq47" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:02.645399  177122 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:03.045476  177122 pod_ready.go:92] pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:03.045502  177122 pod_ready.go:81] duration metric: took 400.097321ms waiting for pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:03.045513  177122 pod_ready.go:38] duration metric: took 5.293782674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
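	For context on what the pod_ready polling above amounts to: each iteration fetches the pod and inspects its PodReady condition until it reports True. A minimal client-go sketch of that check, assuming a hypothetical kubeconfig path and pod name (both placeholders, not values taken from this run):

	        package main

	        import (
	            "context"
	            "fmt"
	            "time"

	            corev1 "k8s.io/api/core/v1"
	            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	            "k8s.io/client-go/kubernetes"
	            "k8s.io/client-go/tools/clientcmd"
	        )

	        // isPodReady reports whether the PodReady condition is True, which is
	        // the condition the pod_ready.go lines above are waiting on.
	        func isPodReady(pod *corev1.Pod) bool {
	            for _, c := range pod.Status.Conditions {
	                if c.Type == corev1.PodReady {
	                    return c.Status == corev1.ConditionTrue
	                }
	            }
	            return false
	        }

	        func main() {
	            // Placeholder kubeconfig path and pod name, for illustration only.
	            cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	            if err != nil {
	                panic(err)
	            }
	            client, err := kubernetes.NewForConfig(cfg)
	            if err != nil {
	                panic(err)
	            }
	            for {
	                pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-example", metav1.GetOptions{})
	                if err == nil && isPodReady(pod) {
	                    fmt.Println("pod is Ready")
	                    return
	                }
	                time.Sleep(2 * time.Second)
	            }
	        }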
	I1213 00:14:03.045530  177122 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:14:03.045584  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:14:03.062802  177122 api_server.go:72] duration metric: took 5.663381439s to wait for apiserver process to appear ...
	I1213 00:14:03.062827  177122 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:14:03.062848  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:14:03.068482  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 200:
	ok
	I1213 00:14:03.069909  177122 api_server.go:141] control plane version: v1.28.4
	I1213 00:14:03.069934  177122 api_server.go:131] duration metric: took 7.099309ms to wait for apiserver health ...
	I1213 00:14:03.069943  177122 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:14:03.248993  177122 system_pods.go:59] 9 kube-system pods found
	I1213 00:14:03.249025  177122 system_pods.go:61] "coredns-5dd5756b68-gs4kb" [d4b86e83-a0a1-4bf8-958e-e154e91f47ef] Running
	I1213 00:14:03.249032  177122 system_pods.go:61] "coredns-5dd5756b68-t92hd" [1ad2dcb3-bcda-42af-b4ce-0d95bba0315f] Running
	I1213 00:14:03.249039  177122 system_pods.go:61] "etcd-embed-certs-335807" [aa5222a7-5670-4550-9d65-6db2095898be] Running
	I1213 00:14:03.249045  177122 system_pods.go:61] "kube-apiserver-embed-certs-335807" [ca0e9de9-8f6a-4bae-b1d1-04b7c0c3cd4c] Running
	I1213 00:14:03.249052  177122 system_pods.go:61] "kube-controller-manager-embed-certs-335807" [f5563afe-3d6c-4b44-b0c0-765da451fd88] Running
	I1213 00:14:03.249057  177122 system_pods.go:61] "kube-proxy-ccq47" [68f3c55f-175e-40af-a769-65c859d5012d] Running
	I1213 00:14:03.249063  177122 system_pods.go:61] "kube-scheduler-embed-certs-335807" [c989cf08-80e9-4b0f-b0e4-f840c6259ace] Running
	I1213 00:14:03.249074  177122 system_pods.go:61] "metrics-server-57f55c9bc5-z7qb4" [b33959c3-63b7-4a81-adda-6d2971036e89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:03.249082  177122 system_pods.go:61] "storage-provisioner" [816660d7-a041-4695-b7da-d977b8891935] Running
	I1213 00:14:03.249095  177122 system_pods.go:74] duration metric: took 179.144496ms to wait for pod list to return data ...
	I1213 00:14:03.249106  177122 default_sa.go:34] waiting for default service account to be created ...
	I1213 00:14:03.444557  177122 default_sa.go:45] found service account: "default"
	I1213 00:14:03.444591  177122 default_sa.go:55] duration metric: took 195.469108ms for default service account to be created ...
	I1213 00:14:03.444603  177122 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 00:14:03.651685  177122 system_pods.go:86] 9 kube-system pods found
	I1213 00:14:03.651714  177122 system_pods.go:89] "coredns-5dd5756b68-gs4kb" [d4b86e83-a0a1-4bf8-958e-e154e91f47ef] Running
	I1213 00:14:03.651719  177122 system_pods.go:89] "coredns-5dd5756b68-t92hd" [1ad2dcb3-bcda-42af-b4ce-0d95bba0315f] Running
	I1213 00:14:03.651723  177122 system_pods.go:89] "etcd-embed-certs-335807" [aa5222a7-5670-4550-9d65-6db2095898be] Running
	I1213 00:14:03.651727  177122 system_pods.go:89] "kube-apiserver-embed-certs-335807" [ca0e9de9-8f6a-4bae-b1d1-04b7c0c3cd4c] Running
	I1213 00:14:03.651731  177122 system_pods.go:89] "kube-controller-manager-embed-certs-335807" [f5563afe-3d6c-4b44-b0c0-765da451fd88] Running
	I1213 00:14:03.651735  177122 system_pods.go:89] "kube-proxy-ccq47" [68f3c55f-175e-40af-a769-65c859d5012d] Running
	I1213 00:14:03.651739  177122 system_pods.go:89] "kube-scheduler-embed-certs-335807" [c989cf08-80e9-4b0f-b0e4-f840c6259ace] Running
	I1213 00:14:03.651745  177122 system_pods.go:89] "metrics-server-57f55c9bc5-z7qb4" [b33959c3-63b7-4a81-adda-6d2971036e89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:03.651750  177122 system_pods.go:89] "storage-provisioner" [816660d7-a041-4695-b7da-d977b8891935] Running
	I1213 00:14:03.651758  177122 system_pods.go:126] duration metric: took 207.148805ms to wait for k8s-apps to be running ...
	I1213 00:14:03.651764  177122 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 00:14:03.651814  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:03.666068  177122 system_svc.go:56] duration metric: took 14.292973ms WaitForService to wait for kubelet.
	I1213 00:14:03.666093  177122 kubeadm.go:581] duration metric: took 6.266680553s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1213 00:14:03.666109  177122 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:14:03.845399  177122 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:14:03.845431  177122 node_conditions.go:123] node cpu capacity is 2
	I1213 00:14:03.845447  177122 node_conditions.go:105] duration metric: took 179.332019ms to run NodePressure ...
	I1213 00:14:03.845462  177122 start.go:228] waiting for startup goroutines ...
	I1213 00:14:03.845470  177122 start.go:233] waiting for cluster config update ...
	I1213 00:14:03.845482  177122 start.go:242] writing updated cluster config ...
	I1213 00:14:03.845850  177122 ssh_runner.go:195] Run: rm -f paused
	I1213 00:14:03.898374  177122 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1213 00:14:03.900465  177122 out.go:177] * Done! kubectl is now configured to use "embed-certs-335807" cluster and "default" namespace by default
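	The apiserver health wait logged just before this boils down to polling https://192.168.61.249:8443/healthz until it answers 200 "ok". A simplified Go sketch of such a probe, with the TLS verification skipped only so the snippet stays self-contained (the real check in api_server.go authenticates against the cluster CA):

	        package main

	        import (
	            "crypto/tls"
	            "fmt"
	            "io"
	            "net/http"
	            "time"
	        )

	        // probeHealthz polls an apiserver /healthz endpoint until it returns
	        // 200 "ok", roughly what the api_server.go lines above report.
	        func probeHealthz(url string) error {
	            client := &http.Client{
	                Timeout: 5 * time.Second,
	                Transport: &http.Transport{
	                    // Skipping cert verification is an assumption made to keep
	                    // this sketch standalone; it is not what minikube does.
	                    TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	                },
	            }
	            for i := 0; i < 30; i++ {
	                resp, err := client.Get(url)
	                if err == nil {
	                    body, _ := io.ReadAll(resp.Body)
	                    resp.Body.Close()
	                    if resp.StatusCode == http.StatusOK {
	                        fmt.Printf("%s returned 200: %s\n", url, body)
	                        return nil
	                    }
	                }
	                time.Sleep(2 * time.Second)
	            }
	            return fmt.Errorf("apiserver at %s never became healthy", url)
	        }

	        func main() {
	            if err := probeHealthz("https://192.168.61.249:8443/healthz"); err != nil {
	                panic(err)
	            }
	        }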
	I1213 00:14:01.693061  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:01.886947  177409 pod_ready.go:81] duration metric: took 4m0.000066225s waiting for pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace to be "Ready" ...
	E1213 00:14:01.886997  177409 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1213 00:14:01.887010  177409 pod_ready.go:38] duration metric: took 4m3.203360525s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:14:01.887056  177409 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:14:01.887093  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 00:14:01.887156  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 00:14:01.956004  177409 cri.go:89] found id: "c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:01.956029  177409 cri.go:89] found id: ""
	I1213 00:14:01.956038  177409 logs.go:284] 1 containers: [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1]
	I1213 00:14:01.956096  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:01.961314  177409 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 00:14:01.961388  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 00:14:02.001797  177409 cri.go:89] found id: "fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:02.001825  177409 cri.go:89] found id: ""
	I1213 00:14:02.001835  177409 logs.go:284] 1 containers: [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7]
	I1213 00:14:02.001881  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.007127  177409 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 00:14:02.007193  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 00:14:02.050259  177409 cri.go:89] found id: "125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:02.050283  177409 cri.go:89] found id: ""
	I1213 00:14:02.050294  177409 logs.go:284] 1 containers: [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad]
	I1213 00:14:02.050347  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.056086  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 00:14:02.056147  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 00:14:02.125159  177409 cri.go:89] found id: "c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:02.125189  177409 cri.go:89] found id: ""
	I1213 00:14:02.125199  177409 logs.go:284] 1 containers: [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee]
	I1213 00:14:02.125261  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.129874  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 00:14:02.129939  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 00:14:02.175027  177409 cri.go:89] found id: "545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:02.175058  177409 cri.go:89] found id: ""
	I1213 00:14:02.175067  177409 logs.go:284] 1 containers: [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41]
	I1213 00:14:02.175127  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.180444  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 00:14:02.180515  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 00:14:02.219578  177409 cri.go:89] found id: "57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:02.219603  177409 cri.go:89] found id: ""
	I1213 00:14:02.219610  177409 logs.go:284] 1 containers: [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673]
	I1213 00:14:02.219664  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.223644  177409 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 00:14:02.223693  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 00:14:02.260542  177409 cri.go:89] found id: ""
	I1213 00:14:02.260567  177409 logs.go:284] 0 containers: []
	W1213 00:14:02.260575  177409 logs.go:286] No container was found matching "kindnet"
	I1213 00:14:02.260583  177409 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 00:14:02.260656  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 00:14:02.304058  177409 cri.go:89] found id: "c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:02.304082  177409 cri.go:89] found id: "705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:02.304090  177409 cri.go:89] found id: ""
	I1213 00:14:02.304100  177409 logs.go:284] 2 containers: [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a]
	I1213 00:14:02.304159  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.308606  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.312421  177409 logs.go:123] Gathering logs for kube-scheduler [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee] ...
	I1213 00:14:02.312473  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:02.356415  177409 logs.go:123] Gathering logs for kube-proxy [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41] ...
	I1213 00:14:02.356460  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:02.405870  177409 logs.go:123] Gathering logs for CRI-O ...
	I1213 00:14:02.405902  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 00:14:02.876461  177409 logs.go:123] Gathering logs for describe nodes ...
	I1213 00:14:02.876508  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 00:14:03.037302  177409 logs.go:123] Gathering logs for kube-apiserver [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1] ...
	I1213 00:14:03.037334  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:03.098244  177409 logs.go:123] Gathering logs for kube-controller-manager [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673] ...
	I1213 00:14:03.098273  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:03.163681  177409 logs.go:123] Gathering logs for container status ...
	I1213 00:14:03.163712  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 00:14:03.216883  177409 logs.go:123] Gathering logs for kubelet ...
	I1213 00:14:03.216912  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 00:14:03.267979  177409 logs.go:123] Gathering logs for coredns [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad] ...
	I1213 00:14:03.268011  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:03.309364  177409 logs.go:123] Gathering logs for storage-provisioner [705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a] ...
	I1213 00:14:03.309394  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:03.352427  177409 logs.go:123] Gathering logs for etcd [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7] ...
	I1213 00:14:03.352479  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:03.406508  177409 logs.go:123] Gathering logs for storage-provisioner [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974] ...
	I1213 00:14:03.406547  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:03.449959  177409 logs.go:123] Gathering logs for dmesg ...
	I1213 00:14:03.449985  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 00:14:02.784516  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:05.284536  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:09.408895  177307 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.578851358s)
	I1213 00:14:09.408954  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:09.422044  177307 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:14:09.430579  177307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:14:09.438689  177307 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:14:09.438727  177307 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 00:14:09.493519  177307 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I1213 00:14:09.493657  177307 kubeadm.go:322] [preflight] Running pre-flight checks
	I1213 00:14:09.648151  177307 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 00:14:09.648294  177307 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 00:14:09.648489  177307 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 00:14:09.908199  177307 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 00:14:05.974125  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:14:05.992335  177409 api_server.go:72] duration metric: took 4m12.842684139s to wait for apiserver process to appear ...
	I1213 00:14:05.992364  177409 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:14:05.992411  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 00:14:05.992491  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 00:14:06.037770  177409 cri.go:89] found id: "c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:06.037796  177409 cri.go:89] found id: ""
	I1213 00:14:06.037805  177409 logs.go:284] 1 containers: [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1]
	I1213 00:14:06.037863  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.042949  177409 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 00:14:06.043016  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 00:14:06.090863  177409 cri.go:89] found id: "fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:06.090888  177409 cri.go:89] found id: ""
	I1213 00:14:06.090897  177409 logs.go:284] 1 containers: [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7]
	I1213 00:14:06.090951  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.103859  177409 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 00:14:06.103925  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 00:14:06.156957  177409 cri.go:89] found id: "125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:06.156982  177409 cri.go:89] found id: ""
	I1213 00:14:06.156992  177409 logs.go:284] 1 containers: [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad]
	I1213 00:14:06.157053  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.162170  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 00:14:06.162220  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 00:14:06.204839  177409 cri.go:89] found id: "c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:06.204867  177409 cri.go:89] found id: ""
	I1213 00:14:06.204877  177409 logs.go:284] 1 containers: [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee]
	I1213 00:14:06.204942  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.210221  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 00:14:06.210287  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 00:14:06.255881  177409 cri.go:89] found id: "545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:06.255909  177409 cri.go:89] found id: ""
	I1213 00:14:06.255918  177409 logs.go:284] 1 containers: [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41]
	I1213 00:14:06.255984  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.260853  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 00:14:06.260924  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 00:14:06.308377  177409 cri.go:89] found id: "57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:06.308400  177409 cri.go:89] found id: ""
	I1213 00:14:06.308413  177409 logs.go:284] 1 containers: [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673]
	I1213 00:14:06.308493  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.315028  177409 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 00:14:06.315111  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 00:14:06.365453  177409 cri.go:89] found id: ""
	I1213 00:14:06.365484  177409 logs.go:284] 0 containers: []
	W1213 00:14:06.365494  177409 logs.go:286] No container was found matching "kindnet"
	I1213 00:14:06.365507  177409 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 00:14:06.365568  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 00:14:06.423520  177409 cri.go:89] found id: "c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:06.423545  177409 cri.go:89] found id: "705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:06.423560  177409 cri.go:89] found id: ""
	I1213 00:14:06.423571  177409 logs.go:284] 2 containers: [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a]
	I1213 00:14:06.423628  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.429613  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.434283  177409 logs.go:123] Gathering logs for describe nodes ...
	I1213 00:14:06.434310  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 00:14:06.571329  177409 logs.go:123] Gathering logs for kube-proxy [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41] ...
	I1213 00:14:06.571375  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:06.613274  177409 logs.go:123] Gathering logs for kube-controller-manager [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673] ...
	I1213 00:14:06.613307  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:06.673407  177409 logs.go:123] Gathering logs for dmesg ...
	I1213 00:14:06.673455  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 00:14:06.688886  177409 logs.go:123] Gathering logs for coredns [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad] ...
	I1213 00:14:06.688933  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:06.733130  177409 logs.go:123] Gathering logs for storage-provisioner [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974] ...
	I1213 00:14:06.733162  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:06.780131  177409 logs.go:123] Gathering logs for storage-provisioner [705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a] ...
	I1213 00:14:06.780161  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:06.827465  177409 logs.go:123] Gathering logs for etcd [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7] ...
	I1213 00:14:06.827500  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:06.880245  177409 logs.go:123] Gathering logs for kube-scheduler [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee] ...
	I1213 00:14:06.880286  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:06.919735  177409 logs.go:123] Gathering logs for kubelet ...
	I1213 00:14:06.919764  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 00:14:06.974039  177409 logs.go:123] Gathering logs for CRI-O ...
	I1213 00:14:06.974074  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 00:14:07.400452  177409 logs.go:123] Gathering logs for container status ...
	I1213 00:14:07.400491  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 00:14:07.456759  177409 logs.go:123] Gathering logs for kube-apiserver [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1] ...
	I1213 00:14:07.456789  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:10.010686  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:14:10.017803  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I1213 00:14:10.019196  177409 api_server.go:141] control plane version: v1.28.4
	I1213 00:14:10.019216  177409 api_server.go:131] duration metric: took 4.026844615s to wait for apiserver health ...
	I1213 00:14:10.019225  177409 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:14:10.019251  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 00:14:10.019303  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 00:14:07.784301  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:09.785226  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:09.910151  177307 out.go:204]   - Generating certificates and keys ...
	I1213 00:14:09.910259  177307 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1213 00:14:09.910339  177307 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1213 00:14:09.910444  177307 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 00:14:09.910527  177307 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1213 00:14:09.910616  177307 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 00:14:09.910662  177307 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1213 00:14:09.910713  177307 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1213 00:14:09.910791  177307 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1213 00:14:09.910892  177307 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 00:14:09.911041  177307 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 00:14:09.911107  177307 kubeadm.go:322] [certs] Using the existing "sa" key
	I1213 00:14:09.911186  177307 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 00:14:10.262533  177307 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 00:14:10.508123  177307 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 00:14:10.766822  177307 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 00:14:10.866565  177307 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 00:14:11.206659  177307 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 00:14:11.207238  177307 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 00:14:11.210018  177307 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 00:14:10.061672  177409 cri.go:89] found id: "c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:10.061699  177409 cri.go:89] found id: ""
	I1213 00:14:10.061708  177409 logs.go:284] 1 containers: [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1]
	I1213 00:14:10.061769  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.066426  177409 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 00:14:10.066491  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 00:14:10.107949  177409 cri.go:89] found id: "fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:10.107978  177409 cri.go:89] found id: ""
	I1213 00:14:10.107994  177409 logs.go:284] 1 containers: [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7]
	I1213 00:14:10.108053  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.112321  177409 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 00:14:10.112393  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 00:14:10.169082  177409 cri.go:89] found id: "125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:10.169110  177409 cri.go:89] found id: ""
	I1213 00:14:10.169120  177409 logs.go:284] 1 containers: [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad]
	I1213 00:14:10.169175  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.174172  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 00:14:10.174225  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 00:14:10.220290  177409 cri.go:89] found id: "c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:10.220313  177409 cri.go:89] found id: ""
	I1213 00:14:10.220326  177409 logs.go:284] 1 containers: [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee]
	I1213 00:14:10.220384  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.225241  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 00:14:10.225310  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 00:14:10.271312  177409 cri.go:89] found id: "545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:10.271336  177409 cri.go:89] found id: ""
	I1213 00:14:10.271345  177409 logs.go:284] 1 containers: [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41]
	I1213 00:14:10.271401  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.275974  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 00:14:10.276049  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 00:14:10.324262  177409 cri.go:89] found id: "57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:10.324288  177409 cri.go:89] found id: ""
	I1213 00:14:10.324299  177409 logs.go:284] 1 containers: [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673]
	I1213 00:14:10.324360  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.329065  177409 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 00:14:10.329130  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 00:14:10.375611  177409 cri.go:89] found id: ""
	I1213 00:14:10.375640  177409 logs.go:284] 0 containers: []
	W1213 00:14:10.375648  177409 logs.go:286] No container was found matching "kindnet"
	I1213 00:14:10.375654  177409 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 00:14:10.375725  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 00:14:10.420778  177409 cri.go:89] found id: "c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:10.420807  177409 cri.go:89] found id: "705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:10.420812  177409 cri.go:89] found id: ""
	I1213 00:14:10.420819  177409 logs.go:284] 2 containers: [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a]
	I1213 00:14:10.420866  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.425676  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.430150  177409 logs.go:123] Gathering logs for kubelet ...
	I1213 00:14:10.430180  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 00:14:10.486314  177409 logs.go:123] Gathering logs for dmesg ...
	I1213 00:14:10.486351  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 00:14:10.500915  177409 logs.go:123] Gathering logs for kube-proxy [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41] ...
	I1213 00:14:10.500946  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:10.543073  177409 logs.go:123] Gathering logs for storage-provisioner [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974] ...
	I1213 00:14:10.543108  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:10.584779  177409 logs.go:123] Gathering logs for storage-provisioner [705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a] ...
	I1213 00:14:10.584814  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:10.629824  177409 logs.go:123] Gathering logs for describe nodes ...
	I1213 00:14:10.629852  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 00:14:10.756816  177409 logs.go:123] Gathering logs for kube-apiserver [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1] ...
	I1213 00:14:10.756857  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:10.807506  177409 logs.go:123] Gathering logs for coredns [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad] ...
	I1213 00:14:10.807536  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:10.849398  177409 logs.go:123] Gathering logs for kube-controller-manager [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673] ...
	I1213 00:14:10.849436  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:10.911470  177409 logs.go:123] Gathering logs for CRI-O ...
	I1213 00:14:10.911508  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 00:14:11.288892  177409 logs.go:123] Gathering logs for etcd [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7] ...
	I1213 00:14:11.288941  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:11.361299  177409 logs.go:123] Gathering logs for kube-scheduler [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee] ...
	I1213 00:14:11.361347  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:11.407800  177409 logs.go:123] Gathering logs for container status ...
	I1213 00:14:11.407850  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 00:14:13.965440  177409 system_pods.go:59] 8 kube-system pods found
	I1213 00:14:13.965477  177409 system_pods.go:61] "coredns-5dd5756b68-ftv9l" [60d9730b-2e6b-4263-a70d-273cf6837f60] Running
	I1213 00:14:13.965485  177409 system_pods.go:61] "etcd-default-k8s-diff-port-743278" [4c78aadc-8213-4cd3-a365-e8d71d00b1ed] Running
	I1213 00:14:13.965493  177409 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-743278" [2590235b-fc24-41ed-8879-909cbba26d5c] Running
	I1213 00:14:13.965500  177409 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-743278" [58396a31-0a0d-4a3f-b658-e27f836affd1] Running
	I1213 00:14:13.965505  177409 system_pods.go:61] "kube-proxy-zk4wl" [e20fe8f7-0c1f-4be3-8184-cd3d6cc19a43] Running
	I1213 00:14:13.965509  177409 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-743278" [e4312815-a719-4ab5-b628-9958dd7ce658] Running
	I1213 00:14:13.965518  177409 system_pods.go:61] "metrics-server-57f55c9bc5-6q9jg" [b1849258-4fd1-43a5-b67b-02d8e44acd8b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:13.965528  177409 system_pods.go:61] "storage-provisioner" [d87ee16e-300f-4797-b0be-efc256d0e827] Running
	I1213 00:14:13.965538  177409 system_pods.go:74] duration metric: took 3.946305195s to wait for pod list to return data ...
	I1213 00:14:13.965548  177409 default_sa.go:34] waiting for default service account to be created ...
	I1213 00:14:13.969074  177409 default_sa.go:45] found service account: "default"
	I1213 00:14:13.969103  177409 default_sa.go:55] duration metric: took 3.543208ms for default service account to be created ...
	I1213 00:14:13.969114  177409 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 00:14:13.977167  177409 system_pods.go:86] 8 kube-system pods found
	I1213 00:14:13.977201  177409 system_pods.go:89] "coredns-5dd5756b68-ftv9l" [60d9730b-2e6b-4263-a70d-273cf6837f60] Running
	I1213 00:14:13.977211  177409 system_pods.go:89] "etcd-default-k8s-diff-port-743278" [4c78aadc-8213-4cd3-a365-e8d71d00b1ed] Running
	I1213 00:14:13.977219  177409 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-743278" [2590235b-fc24-41ed-8879-909cbba26d5c] Running
	I1213 00:14:13.977226  177409 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-743278" [58396a31-0a0d-4a3f-b658-e27f836affd1] Running
	I1213 00:14:13.977232  177409 system_pods.go:89] "kube-proxy-zk4wl" [e20fe8f7-0c1f-4be3-8184-cd3d6cc19a43] Running
	I1213 00:14:13.977238  177409 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-743278" [e4312815-a719-4ab5-b628-9958dd7ce658] Running
	I1213 00:14:13.977249  177409 system_pods.go:89] "metrics-server-57f55c9bc5-6q9jg" [b1849258-4fd1-43a5-b67b-02d8e44acd8b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:13.977257  177409 system_pods.go:89] "storage-provisioner" [d87ee16e-300f-4797-b0be-efc256d0e827] Running
	I1213 00:14:13.977272  177409 system_pods.go:126] duration metric: took 8.1502ms to wait for k8s-apps to be running ...
	I1213 00:14:13.977288  177409 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 00:14:13.977342  177409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:13.996304  177409 system_svc.go:56] duration metric: took 19.006856ms WaitForService to wait for kubelet.
	I1213 00:14:13.996340  177409 kubeadm.go:581] duration metric: took 4m20.846697962s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1213 00:14:13.996374  177409 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:14:14.000473  177409 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:14:14.000505  177409 node_conditions.go:123] node cpu capacity is 2
	I1213 00:14:14.000518  177409 node_conditions.go:105] duration metric: took 4.137212ms to run NodePressure ...
	I1213 00:14:14.000534  177409 start.go:228] waiting for startup goroutines ...
	I1213 00:14:14.000544  177409 start.go:233] waiting for cluster config update ...
	I1213 00:14:14.000561  177409 start.go:242] writing updated cluster config ...
	I1213 00:14:14.000901  177409 ssh_runner.go:195] Run: rm -f paused
	I1213 00:14:14.059785  177409 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1213 00:14:14.062155  177409 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-743278" cluster and "default" namespace by default
	I1213 00:14:11.212405  177307 out.go:204]   - Booting up control plane ...
	I1213 00:14:11.212538  177307 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 00:14:11.213865  177307 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 00:14:11.215312  177307 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 00:14:11.235356  177307 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 00:14:11.236645  177307 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 00:14:11.236755  177307 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1213 00:14:11.385788  177307 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 00:14:12.284994  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:14.784159  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:19.387966  177307 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002219 seconds
	I1213 00:14:19.402873  177307 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 00:14:19.424220  177307 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 00:14:19.954243  177307 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 00:14:19.954453  177307 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-143586 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 00:14:20.468986  177307 kubeadm.go:322] [bootstrap-token] Using token: nss44e.j85t1ilri9kvvn0e
	I1213 00:14:16.785364  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:19.284214  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:20.470732  177307 out.go:204]   - Configuring RBAC rules ...
	I1213 00:14:20.470866  177307 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 00:14:20.479490  177307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 00:14:20.488098  177307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 00:14:20.491874  177307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 00:14:20.496891  177307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 00:14:20.506058  177307 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 00:14:20.523032  177307 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 00:14:20.796465  177307 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1213 00:14:20.892018  177307 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1213 00:14:20.892049  177307 kubeadm.go:322] 
	I1213 00:14:20.892159  177307 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1213 00:14:20.892185  177307 kubeadm.go:322] 
	I1213 00:14:20.892284  177307 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1213 00:14:20.892296  177307 kubeadm.go:322] 
	I1213 00:14:20.892338  177307 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1213 00:14:20.892421  177307 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 00:14:20.892512  177307 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 00:14:20.892529  177307 kubeadm.go:322] 
	I1213 00:14:20.892620  177307 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1213 00:14:20.892648  177307 kubeadm.go:322] 
	I1213 00:14:20.892734  177307 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 00:14:20.892745  177307 kubeadm.go:322] 
	I1213 00:14:20.892807  177307 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1213 00:14:20.892938  177307 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 00:14:20.893057  177307 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 00:14:20.893072  177307 kubeadm.go:322] 
	I1213 00:14:20.893182  177307 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 00:14:20.893286  177307 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1213 00:14:20.893307  177307 kubeadm.go:322] 
	I1213 00:14:20.893446  177307 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nss44e.j85t1ilri9kvvn0e \
	I1213 00:14:20.893588  177307 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d \
	I1213 00:14:20.893625  177307 kubeadm.go:322] 	--control-plane 
	I1213 00:14:20.893634  177307 kubeadm.go:322] 
	I1213 00:14:20.893740  177307 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1213 00:14:20.893752  177307 kubeadm.go:322] 
	I1213 00:14:20.893877  177307 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nss44e.j85t1ilri9kvvn0e \
	I1213 00:14:20.894017  177307 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1213 00:14:20.895217  177307 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 00:14:20.895249  177307 cni.go:84] Creating CNI manager for ""
	I1213 00:14:20.895261  177307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:14:20.897262  177307 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:14:20.898838  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:14:20.933446  177307 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:14:20.985336  177307 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:14:20.985435  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:20.985458  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=no-preload-143586 minikube.k8s.io/updated_at=2023_12_13T00_14_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:21.062513  177307 ops.go:34] apiserver oom_adj: -16
	I1213 00:14:21.374568  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:21.482135  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:22.088971  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:22.588816  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:23.088960  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:23.588701  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:24.088568  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:21.783473  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:23.784019  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:25.785712  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:24.588803  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:25.088983  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:25.589097  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:26.088561  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:26.589160  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:27.088601  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:27.588337  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:28.088578  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:28.588533  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:29.088398  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:28.284015  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:30.285509  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:29.588587  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:30.088826  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:30.588871  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:31.089336  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:31.588959  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:32.088390  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:32.589079  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:33.088948  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:33.589067  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:34.089108  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:34.261304  177307 kubeadm.go:1088] duration metric: took 13.275930767s to wait for elevateKubeSystemPrivileges.
	I1213 00:14:34.261367  177307 kubeadm.go:406] StartCluster complete in 5m12.573209179s
	I1213 00:14:34.261392  177307 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:14:34.261511  177307 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:14:34.264237  177307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:14:34.264668  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:14:34.264951  177307 config.go:182] Loaded profile config "no-preload-143586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1213 00:14:34.265065  177307 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:14:34.265128  177307 addons.go:69] Setting storage-provisioner=true in profile "no-preload-143586"
	I1213 00:14:34.265150  177307 addons.go:231] Setting addon storage-provisioner=true in "no-preload-143586"
	W1213 00:14:34.265161  177307 addons.go:240] addon storage-provisioner should already be in state true
	I1213 00:14:34.265202  177307 host.go:66] Checking if "no-preload-143586" exists ...
	I1213 00:14:34.265231  177307 addons.go:69] Setting default-storageclass=true in profile "no-preload-143586"
	I1213 00:14:34.265262  177307 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-143586"
	I1213 00:14:34.265606  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.265612  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.265627  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.265628  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.265846  177307 addons.go:69] Setting metrics-server=true in profile "no-preload-143586"
	I1213 00:14:34.265878  177307 addons.go:231] Setting addon metrics-server=true in "no-preload-143586"
	W1213 00:14:34.265890  177307 addons.go:240] addon metrics-server should already be in state true
	I1213 00:14:34.265935  177307 host.go:66] Checking if "no-preload-143586" exists ...
	I1213 00:14:34.266231  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.266277  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.287844  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34161
	I1213 00:14:34.287882  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38719
	I1213 00:14:34.287968  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43979
	I1213 00:14:34.288509  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.288529  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.288811  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.289178  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.289197  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.289310  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.289325  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.289335  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.289347  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.289707  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.289713  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.289736  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.289891  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:14:34.290392  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.290398  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.290415  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.290417  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.293696  177307 addons.go:231] Setting addon default-storageclass=true in "no-preload-143586"
	W1213 00:14:34.293725  177307 addons.go:240] addon default-storageclass should already be in state true
	I1213 00:14:34.293756  177307 host.go:66] Checking if "no-preload-143586" exists ...
	I1213 00:14:34.294150  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.294187  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.309103  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39273
	I1213 00:14:34.309683  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.310362  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.310387  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.310830  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.311091  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:14:34.312755  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37041
	I1213 00:14:34.313192  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.313601  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:14:34.313796  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.313814  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.316496  177307 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:14:34.314223  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.316102  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41945
	I1213 00:14:34.318112  177307 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:14:34.318127  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:14:34.318144  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:14:34.318260  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.318670  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.318693  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.319401  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.319422  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.319860  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.320080  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:14:34.321977  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:14:34.323695  177307 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 00:14:34.322509  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.325025  177307 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 00:14:34.325037  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 00:14:34.325053  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:14:34.323731  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:14:34.325089  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.323250  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:14:34.325250  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:14:34.325428  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:14:34.325563  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:14:34.328055  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.328364  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:14:34.328386  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.328712  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:14:34.328867  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:14:34.328980  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:14:34.329099  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:14:34.339175  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35817
	I1213 00:14:34.339820  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.340300  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.340314  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.340662  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.340821  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:14:34.342399  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:14:34.342673  177307 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:14:34.342694  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:14:34.342720  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:14:34.345475  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.345804  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:14:34.345839  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.346062  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:14:34.346256  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:14:34.346453  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:14:34.346622  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:14:34.425634  177307 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-143586" context rescaled to 1 replicas
	I1213 00:14:34.425672  177307 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.181 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:14:34.427471  177307 out.go:177] * Verifying Kubernetes components...
	I1213 00:14:32.783642  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:34.786810  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:34.428983  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:34.589995  177307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:14:34.590692  177307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:14:34.592452  177307 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 00:14:34.592472  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 00:14:34.643312  177307 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 00:14:34.643336  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 00:14:34.649786  177307 node_ready.go:35] waiting up to 6m0s for node "no-preload-143586" to be "Ready" ...
	I1213 00:14:34.649926  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 00:14:34.683306  177307 node_ready.go:49] node "no-preload-143586" has status "Ready":"True"
	I1213 00:14:34.683339  177307 node_ready.go:38] duration metric: took 33.525188ms waiting for node "no-preload-143586" to be "Ready" ...
	I1213 00:14:34.683352  177307 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:14:34.711542  177307 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:14:34.711570  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 00:14:34.738788  177307 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-8fb8b" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:34.823110  177307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:14:35.743550  177307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.153515373s)
	I1213 00:14:35.743618  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.743634  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.743661  177307 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.093703901s)
	I1213 00:14:35.743611  177307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.152891747s)
	I1213 00:14:35.743699  177307 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1213 00:14:35.743719  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.743732  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.744060  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:35.744059  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.744088  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.744100  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.744114  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.744114  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:35.744158  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.744195  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.744209  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.744223  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.745779  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.745829  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.745855  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.745838  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.745797  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:35.745790  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:35.757271  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.757292  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.757758  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.757776  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.757787  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:36.114702  177307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.291538738s)
	I1213 00:14:36.114760  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:36.114773  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:36.115132  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:36.115149  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:36.115158  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:36.115168  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:36.115411  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:36.115426  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:36.115436  177307 addons.go:467] Verifying addon metrics-server=true in "no-preload-143586"
	I1213 00:14:36.117975  177307 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1213 00:14:36.119554  177307 addons.go:502] enable addons completed in 1.85448385s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1213 00:14:37.069993  177307 pod_ready.go:102] pod "coredns-76f75df574-8fb8b" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:38.563525  177307 pod_ready.go:92] pod "coredns-76f75df574-8fb8b" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.563551  177307 pod_ready.go:81] duration metric: took 3.824732725s waiting for pod "coredns-76f75df574-8fb8b" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.563561  177307 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hs9rg" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.565949  177307 pod_ready.go:97] error getting pod "coredns-76f75df574-hs9rg" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-hs9rg" not found
	I1213 00:14:38.565976  177307 pod_ready.go:81] duration metric: took 2.409349ms waiting for pod "coredns-76f75df574-hs9rg" in "kube-system" namespace to be "Ready" ...
	E1213 00:14:38.565984  177307 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-hs9rg" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-hs9rg" not found
	I1213 00:14:38.565990  177307 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.571396  177307 pod_ready.go:92] pod "etcd-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.571416  177307 pod_ready.go:81] duration metric: took 5.419634ms waiting for pod "etcd-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.571424  177307 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.576228  177307 pod_ready.go:92] pod "kube-apiserver-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.576248  177307 pod_ready.go:81] duration metric: took 4.818853ms waiting for pod "kube-apiserver-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.576256  177307 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.581260  177307 pod_ready.go:92] pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.581281  177307 pod_ready.go:81] duration metric: took 5.019621ms waiting for pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.581289  177307 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xsdtr" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.760984  177307 pod_ready.go:92] pod "kube-proxy-xsdtr" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.761006  177307 pod_ready.go:81] duration metric: took 179.711484ms waiting for pod "kube-proxy-xsdtr" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.761015  177307 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:39.160713  177307 pod_ready.go:92] pod "kube-scheduler-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:39.160738  177307 pod_ready.go:81] duration metric: took 399.716844ms waiting for pod "kube-scheduler-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:39.160746  177307 pod_ready.go:38] duration metric: took 4.477382003s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:14:39.160762  177307 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:14:39.160809  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:14:39.176747  177307 api_server.go:72] duration metric: took 4.751030848s to wait for apiserver process to appear ...
	I1213 00:14:39.176774  177307 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:14:39.176791  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:14:39.183395  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 200:
	ok
	I1213 00:14:39.184769  177307 api_server.go:141] control plane version: v1.29.0-rc.2
	I1213 00:14:39.184789  177307 api_server.go:131] duration metric: took 8.009007ms to wait for apiserver health ...
	I1213 00:14:39.184799  177307 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:14:39.364215  177307 system_pods.go:59] 8 kube-system pods found
	I1213 00:14:39.364251  177307 system_pods.go:61] "coredns-76f75df574-8fb8b" [3ca4e237-6a35-4e8f-9731-3f9655eba995] Running
	I1213 00:14:39.364256  177307 system_pods.go:61] "etcd-no-preload-143586" [205757f5-5bd7-416c-af72-6ebf428d7302] Running
	I1213 00:14:39.364260  177307 system_pods.go:61] "kube-apiserver-no-preload-143586" [a479caf2-8ff4-4b78-8d93-e8f672a853b9] Running
	I1213 00:14:39.364265  177307 system_pods.go:61] "kube-controller-manager-no-preload-143586" [4afd5e8a-64b3-4e0c-a723-b6a6bd2445d4] Running
	I1213 00:14:39.364269  177307 system_pods.go:61] "kube-proxy-xsdtr" [23a261a4-17d1-4657-8052-02b71055c850] Running
	I1213 00:14:39.364273  177307 system_pods.go:61] "kube-scheduler-no-preload-143586" [71312243-0ba6-40e0-80a5-c1652b5270e9] Running
	I1213 00:14:39.364280  177307 system_pods.go:61] "metrics-server-57f55c9bc5-q7v45" [1579f5c9-d574-4ab8-9add-e89621b9c203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:39.364284  177307 system_pods.go:61] "storage-provisioner" [400b27cc-1713-4201-8097-3e3fd8004690] Running
	I1213 00:14:39.364292  177307 system_pods.go:74] duration metric: took 179.488069ms to wait for pod list to return data ...
	I1213 00:14:39.364301  177307 default_sa.go:34] waiting for default service account to be created ...
	I1213 00:14:39.560330  177307 default_sa.go:45] found service account: "default"
	I1213 00:14:39.560364  177307 default_sa.go:55] duration metric: took 196.056049ms for default service account to be created ...
	I1213 00:14:39.560376  177307 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 00:14:39.763340  177307 system_pods.go:86] 8 kube-system pods found
	I1213 00:14:39.763384  177307 system_pods.go:89] "coredns-76f75df574-8fb8b" [3ca4e237-6a35-4e8f-9731-3f9655eba995] Running
	I1213 00:14:39.763393  177307 system_pods.go:89] "etcd-no-preload-143586" [205757f5-5bd7-416c-af72-6ebf428d7302] Running
	I1213 00:14:39.763400  177307 system_pods.go:89] "kube-apiserver-no-preload-143586" [a479caf2-8ff4-4b78-8d93-e8f672a853b9] Running
	I1213 00:14:39.763405  177307 system_pods.go:89] "kube-controller-manager-no-preload-143586" [4afd5e8a-64b3-4e0c-a723-b6a6bd2445d4] Running
	I1213 00:14:39.763409  177307 system_pods.go:89] "kube-proxy-xsdtr" [23a261a4-17d1-4657-8052-02b71055c850] Running
	I1213 00:14:39.763414  177307 system_pods.go:89] "kube-scheduler-no-preload-143586" [71312243-0ba6-40e0-80a5-c1652b5270e9] Running
	I1213 00:14:39.763426  177307 system_pods.go:89] "metrics-server-57f55c9bc5-q7v45" [1579f5c9-d574-4ab8-9add-e89621b9c203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:39.763434  177307 system_pods.go:89] "storage-provisioner" [400b27cc-1713-4201-8097-3e3fd8004690] Running
	I1213 00:14:39.763449  177307 system_pods.go:126] duration metric: took 203.065345ms to wait for k8s-apps to be running ...
	I1213 00:14:39.763458  177307 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 00:14:39.763517  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:39.783072  177307 system_svc.go:56] duration metric: took 19.601725ms WaitForService to wait for kubelet.
	I1213 00:14:39.783120  177307 kubeadm.go:581] duration metric: took 5.357406192s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1213 00:14:39.783147  177307 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:14:39.962475  177307 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:14:39.962501  177307 node_conditions.go:123] node cpu capacity is 2
	I1213 00:14:39.962511  177307 node_conditions.go:105] duration metric: took 179.359327ms to run NodePressure ...
	I1213 00:14:39.962524  177307 start.go:228] waiting for startup goroutines ...
	I1213 00:14:39.962532  177307 start.go:233] waiting for cluster config update ...
	I1213 00:14:39.962544  177307 start.go:242] writing updated cluster config ...
	I1213 00:14:39.962816  177307 ssh_runner.go:195] Run: rm -f paused
	I1213 00:14:40.016206  177307 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.2 (minor skew: 1)
	I1213 00:14:40.018375  177307 out.go:177] * Done! kubectl is now configured to use "no-preload-143586" cluster and "default" namespace by default
	I1213 00:14:37.286105  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:39.786060  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:42.285678  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:44.784213  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:47.285680  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:49.783428  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:51.785923  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:54.283780  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:56.783343  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:59.283053  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:15:00.976984  176813 pod_ready.go:81] duration metric: took 4m0.000041493s waiting for pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace to be "Ready" ...
	E1213 00:15:00.977016  176813 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1213 00:15:00.977037  176813 pod_ready.go:38] duration metric: took 4m1.19985839s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:15:00.977064  176813 kubeadm.go:640] restartCluster took 5m6.659231001s
	W1213 00:15:00.977141  176813 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1213 00:15:00.977178  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 00:15:07.653665  176813 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.676456274s)
	I1213 00:15:07.653745  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:15:07.673981  176813 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:15:07.688018  176813 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:15:07.699196  176813 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:15:07.699244  176813 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1213 00:15:07.761890  176813 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1213 00:15:07.762010  176813 kubeadm.go:322] [preflight] Running pre-flight checks
	I1213 00:15:07.921068  176813 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 00:15:07.921220  176813 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 00:15:07.921360  176813 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 00:15:08.151937  176813 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 00:15:08.152063  176813 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 00:15:08.159296  176813 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1213 00:15:08.285060  176813 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 00:15:08.286903  176813 out.go:204]   - Generating certificates and keys ...
	I1213 00:15:08.287074  176813 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1213 00:15:08.287174  176813 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1213 00:15:08.290235  176813 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 00:15:08.290397  176813 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1213 00:15:08.290878  176813 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 00:15:08.291179  176813 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1213 00:15:08.291663  176813 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1213 00:15:08.292342  176813 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1213 00:15:08.292822  176813 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 00:15:08.293259  176813 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 00:15:08.293339  176813 kubeadm.go:322] [certs] Using the existing "sa" key
	I1213 00:15:08.293429  176813 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 00:15:08.526145  176813 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 00:15:08.586842  176813 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 00:15:08.636575  176813 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 00:15:08.706448  176813 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 00:15:08.710760  176813 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 00:15:08.713664  176813 out.go:204]   - Booting up control plane ...
	I1213 00:15:08.713773  176813 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 00:15:08.718431  176813 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 00:15:08.719490  176813 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 00:15:08.720327  176813 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 00:15:08.722707  176813 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 00:15:19.226839  176813 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503804 seconds
	I1213 00:15:19.227005  176813 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 00:15:19.245054  176813 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 00:15:19.773910  176813 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 00:15:19.774100  176813 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-508612 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1213 00:15:20.284136  176813 kubeadm.go:322] [bootstrap-token] Using token: lgq05i.maaa534t8w734gvq
	I1213 00:15:20.286042  176813 out.go:204]   - Configuring RBAC rules ...
	I1213 00:15:20.286186  176813 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 00:15:20.297875  176813 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 00:15:20.305644  176813 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 00:15:20.314089  176813 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 00:15:20.319091  176813 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 00:15:20.387872  176813 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1213 00:15:20.733546  176813 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1213 00:15:20.735072  176813 kubeadm.go:322] 
	I1213 00:15:20.735157  176813 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1213 00:15:20.735168  176813 kubeadm.go:322] 
	I1213 00:15:20.735280  176813 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1213 00:15:20.735291  176813 kubeadm.go:322] 
	I1213 00:15:20.735314  176813 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1213 00:15:20.735389  176813 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 00:15:20.735451  176813 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 00:15:20.735459  176813 kubeadm.go:322] 
	I1213 00:15:20.735517  176813 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1213 00:15:20.735602  176813 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 00:15:20.735660  176813 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 00:15:20.735666  176813 kubeadm.go:322] 
	I1213 00:15:20.735757  176813 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1213 00:15:20.735867  176813 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1213 00:15:20.735889  176813 kubeadm.go:322] 
	I1213 00:15:20.736036  176813 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token lgq05i.maaa534t8w734gvq \
	I1213 00:15:20.736152  176813 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d \
	I1213 00:15:20.736223  176813 kubeadm.go:322]     --control-plane 	  
	I1213 00:15:20.736240  176813 kubeadm.go:322] 
	I1213 00:15:20.736348  176813 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1213 00:15:20.736357  176813 kubeadm.go:322] 
	I1213 00:15:20.736472  176813 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token lgq05i.maaa534t8w734gvq \
	I1213 00:15:20.736596  176813 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1213 00:15:20.737307  176813 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 00:15:20.737332  176813 cni.go:84] Creating CNI manager for ""
	I1213 00:15:20.737340  176813 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:15:20.739085  176813 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:15:20.740295  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:15:20.749618  176813 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:15:20.767876  176813 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:15:20.767933  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:20.767984  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=old-k8s-version-508612 minikube.k8s.io/updated_at=2023_12_13T00_15_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:21.051677  176813 ops.go:34] apiserver oom_adj: -16
	I1213 00:15:21.051709  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:21.148546  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:21.741424  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:22.240885  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:22.741651  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:23.241662  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:23.741098  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:24.241530  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:24.741035  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:25.241391  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:25.741004  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:26.241402  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:26.741333  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:27.241828  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:27.741151  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:28.240933  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:28.741661  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:29.241431  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:29.741667  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:30.241070  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:30.741117  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:31.241355  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:31.741697  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:32.241779  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:32.741165  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:33.241739  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:33.741499  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:34.241477  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:34.740804  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:35.241596  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:35.374344  176813 kubeadm.go:1088] duration metric: took 14.606462065s to wait for elevateKubeSystemPrivileges.
	I1213 00:15:35.374388  176813 kubeadm.go:406] StartCluster complete in 5m41.120911791s
	I1213 00:15:35.374416  176813 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:15:35.374522  176813 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:15:35.376587  176813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:15:35.376829  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:15:35.376896  176813 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:15:35.376998  176813 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-508612"
	I1213 00:15:35.377018  176813 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-508612"
	I1213 00:15:35.377026  176813 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-508612"
	W1213 00:15:35.377036  176813 addons.go:240] addon storage-provisioner should already be in state true
	I1213 00:15:35.377038  176813 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-508612"
	I1213 00:15:35.377075  176813 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-508612"
	W1213 00:15:35.377089  176813 addons.go:240] addon metrics-server should already be in state true
	I1213 00:15:35.377107  176813 host.go:66] Checking if "old-k8s-version-508612" exists ...
	I1213 00:15:35.377140  176813 host.go:66] Checking if "old-k8s-version-508612" exists ...
	I1213 00:15:35.377536  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.377569  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.377577  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.377603  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.377036  176813 config.go:182] Loaded profile config "old-k8s-version-508612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1213 00:15:35.377038  176813 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-508612"
	I1213 00:15:35.378232  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.378269  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.396758  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38365
	I1213 00:15:35.397242  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.397563  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46353
	I1213 00:15:35.397732  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34007
	I1213 00:15:35.398240  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.398249  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.398768  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.398789  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.398927  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.398944  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.399039  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.399048  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.399144  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.399485  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.399506  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.399699  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:15:35.399783  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.399822  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.400014  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.400052  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.403424  176813 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-508612"
	W1213 00:15:35.403445  176813 addons.go:240] addon default-storageclass should already be in state true
	I1213 00:15:35.403470  176813 host.go:66] Checking if "old-k8s-version-508612" exists ...
	I1213 00:15:35.403784  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.403809  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.419742  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46263
	I1213 00:15:35.419763  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39553
	I1213 00:15:35.420351  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.420378  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.420912  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.420927  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.421042  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.421062  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.421403  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.421450  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.421588  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:15:35.421633  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:15:35.422473  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40033
	I1213 00:15:35.423216  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.423818  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:15:35.423875  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.423890  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.426328  176813 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 00:15:35.424310  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.424522  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:15:35.428333  176813 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 00:15:35.428351  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 00:15:35.428377  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:15:35.430256  176813 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:15:35.428950  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.430439  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.431959  176813 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:15:35.431260  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.431816  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:15:35.432011  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:15:35.431977  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:15:35.432031  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.432047  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:15:35.432199  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:15:35.432359  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:15:35.432587  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:15:35.434239  176813 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-508612" context rescaled to 1 replicas
	I1213 00:15:35.434275  176813 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:15:35.435769  176813 out.go:177] * Verifying Kubernetes components...
	I1213 00:15:35.437082  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:15:35.434982  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.435627  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:15:35.437148  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:15:35.437186  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.437343  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:15:35.437515  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:15:35.437646  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:15:35.450115  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33843
	I1213 00:15:35.450582  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.451077  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.451104  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.451548  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.451822  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:15:35.453721  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:15:35.454034  176813 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:15:35.454052  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:15:35.454072  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:15:35.456976  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.457326  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:15:35.457351  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.457530  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:15:35.457709  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:15:35.457859  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:15:35.458008  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:15:35.599631  176813 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:15:35.607268  176813 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-508612" to be "Ready" ...
	I1213 00:15:35.607407  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 00:15:35.627686  176813 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 00:15:35.627720  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 00:15:35.641865  176813 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:15:35.653972  176813 node_ready.go:49] node "old-k8s-version-508612" has status "Ready":"True"
	I1213 00:15:35.654008  176813 node_ready.go:38] duration metric: took 46.699606ms waiting for node "old-k8s-version-508612" to be "Ready" ...
	I1213 00:15:35.654022  176813 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:15:35.701904  176813 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 00:15:35.701939  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 00:15:35.722752  176813 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-4xsr7" in "kube-system" namespace to be "Ready" ...
	I1213 00:15:35.779684  176813 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:15:35.779719  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 00:15:35.871071  176813 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:15:36.486377  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.486409  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.486428  176813 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1213 00:15:36.486495  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.486513  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.486715  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.486725  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.486734  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.486741  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.486816  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.486826  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.486834  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.486843  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.487015  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.487022  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.487048  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.487156  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.487172  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.487186  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.535004  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.535026  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.535335  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.535394  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.535407  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.671282  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.671308  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.671649  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.671719  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.671739  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.671758  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.671771  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.672067  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.672091  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.672092  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.672102  176813 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-508612"
	I1213 00:15:36.673881  176813 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1213 00:15:36.675200  176813 addons.go:502] enable addons completed in 1.298322525s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1213 00:15:37.860212  176813 pod_ready.go:102] pod "coredns-5644d7b6d9-4xsr7" in "kube-system" namespace has status "Ready":"False"
	I1213 00:15:40.350347  176813 pod_ready.go:92] pod "coredns-5644d7b6d9-4xsr7" in "kube-system" namespace has status "Ready":"True"
	I1213 00:15:40.350370  176813 pod_ready.go:81] duration metric: took 4.627584432s waiting for pod "coredns-5644d7b6d9-4xsr7" in "kube-system" namespace to be "Ready" ...
	I1213 00:15:40.350383  176813 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wz29m" in "kube-system" namespace to be "Ready" ...
	I1213 00:15:40.356218  176813 pod_ready.go:92] pod "kube-proxy-wz29m" in "kube-system" namespace has status "Ready":"True"
	I1213 00:15:40.356240  176813 pod_ready.go:81] duration metric: took 5.84816ms waiting for pod "kube-proxy-wz29m" in "kube-system" namespace to be "Ready" ...
	I1213 00:15:40.356252  176813 pod_ready.go:38] duration metric: took 4.702215033s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:15:40.356270  176813 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:15:40.356324  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:15:40.372391  176813 api_server.go:72] duration metric: took 4.938079614s to wait for apiserver process to appear ...
	I1213 00:15:40.372424  176813 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:15:40.372459  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:15:40.378882  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I1213 00:15:40.379747  176813 api_server.go:141] control plane version: v1.16.0
	I1213 00:15:40.379770  176813 api_server.go:131] duration metric: took 7.338199ms to wait for apiserver health ...
	I1213 00:15:40.379780  176813 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:15:40.383090  176813 system_pods.go:59] 4 kube-system pods found
	I1213 00:15:40.383110  176813 system_pods.go:61] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:40.383115  176813 system_pods.go:61] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:40.383121  176813 system_pods.go:61] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:40.383126  176813 system_pods.go:61] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:40.383133  176813 system_pods.go:74] duration metric: took 3.346988ms to wait for pod list to return data ...
	I1213 00:15:40.383140  176813 default_sa.go:34] waiting for default service account to be created ...
	I1213 00:15:40.385822  176813 default_sa.go:45] found service account: "default"
	I1213 00:15:40.385843  176813 default_sa.go:55] duration metric: took 2.696485ms for default service account to be created ...
	I1213 00:15:40.385851  176813 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 00:15:40.390030  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:40.390056  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:40.390061  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:40.390068  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:40.390072  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:40.390094  176813 retry.go:31] will retry after 206.30305ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:40.602546  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:40.602577  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:40.602582  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:40.602589  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:40.602593  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:40.602611  176813 retry.go:31] will retry after 375.148566ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:40.987598  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:40.987626  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:40.987631  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:40.987639  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:40.987645  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:40.987663  176813 retry.go:31] will retry after 354.607581ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:41.347931  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:41.347965  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:41.347974  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:41.347984  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:41.347992  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:41.348012  176813 retry.go:31] will retry after 443.179207ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:41.796661  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:41.796687  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:41.796692  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:41.796711  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:41.796716  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:41.796733  176813 retry.go:31] will retry after 468.875458ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:42.271565  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:42.271591  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:42.271596  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:42.271603  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:42.271608  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:42.271624  176813 retry.go:31] will retry after 696.629881ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:42.974971  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:42.974997  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:42.975003  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:42.975009  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:42.975015  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:42.975031  176813 retry.go:31] will retry after 830.83436ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:43.810755  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:43.810784  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:43.810792  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:43.810802  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:43.810808  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:43.810830  176813 retry.go:31] will retry after 1.429308487s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:45.245813  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:45.245844  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:45.245852  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:45.245862  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:45.245867  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:45.245887  176813 retry.go:31] will retry after 1.715356562s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:46.966484  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:46.966512  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:46.966517  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:46.966523  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:46.966529  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:46.966546  176813 retry.go:31] will retry after 2.125852813s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:49.097419  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:49.097450  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:49.097460  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:49.097472  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:49.097478  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:49.097496  176813 retry.go:31] will retry after 2.902427415s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:52.005062  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:52.005097  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:52.005106  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:52.005119  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:52.005128  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:52.005154  176813 retry.go:31] will retry after 3.461524498s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:55.471450  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:55.471474  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:55.471480  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:55.471487  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:55.471492  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:55.471509  176813 retry.go:31] will retry after 2.969353102s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:58.445285  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:58.445316  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:58.445324  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:58.445334  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:58.445341  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:58.445363  176813 retry.go:31] will retry after 3.938751371s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:02.389811  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:16:02.389839  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:02.389845  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:02.389851  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:02.389856  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:02.389873  176813 retry.go:31] will retry after 5.281550171s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:07.676759  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:16:07.676786  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:07.676791  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:07.676798  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:07.676802  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:07.676820  176813 retry.go:31] will retry after 8.193775139s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:15.875917  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:16:15.875946  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:15.875951  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:15.875958  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:15.875962  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:15.875980  176813 retry.go:31] will retry after 8.515960159s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:24.397972  176813 system_pods.go:86] 5 kube-system pods found
	I1213 00:16:24.398006  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:24.398014  176813 system_pods.go:89] "etcd-old-k8s-version-508612" [de991ae9-f906-498b-bda3-3cf40035fd6a] Running
	I1213 00:16:24.398021  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:24.398032  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:24.398039  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:24.398060  176813 retry.go:31] will retry after 10.707543157s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:35.112639  176813 system_pods.go:86] 7 kube-system pods found
	I1213 00:16:35.112667  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:35.112672  176813 system_pods.go:89] "etcd-old-k8s-version-508612" [de991ae9-f906-498b-bda3-3cf40035fd6a] Running
	I1213 00:16:35.112677  176813 system_pods.go:89] "kube-controller-manager-old-k8s-version-508612" [c6a195a2-e710-4791-b0a3-32618d3c752c] Running
	I1213 00:16:35.112681  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:35.112685  176813 system_pods.go:89] "kube-scheduler-old-k8s-version-508612" [6ee728bb-d81b-43f2-8aa3-4848871a8f41] Running
	I1213 00:16:35.112691  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:35.112696  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:35.112712  176813 retry.go:31] will retry after 13.429366805s: missing components: kube-apiserver
	I1213 00:16:48.550673  176813 system_pods.go:86] 8 kube-system pods found
	I1213 00:16:48.550704  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:48.550710  176813 system_pods.go:89] "etcd-old-k8s-version-508612" [de991ae9-f906-498b-bda3-3cf40035fd6a] Running
	I1213 00:16:48.550714  176813 system_pods.go:89] "kube-apiserver-old-k8s-version-508612" [1473501b-d17d-4bbb-a61a-1d244f54f70c] Running
	I1213 00:16:48.550718  176813 system_pods.go:89] "kube-controller-manager-old-k8s-version-508612" [c6a195a2-e710-4791-b0a3-32618d3c752c] Running
	I1213 00:16:48.550722  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:48.550726  176813 system_pods.go:89] "kube-scheduler-old-k8s-version-508612" [6ee728bb-d81b-43f2-8aa3-4848871a8f41] Running
	I1213 00:16:48.550733  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:48.550737  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:48.550747  176813 system_pods.go:126] duration metric: took 1m8.164889078s to wait for k8s-apps to be running ...
	I1213 00:16:48.550756  176813 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 00:16:48.550811  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:16:48.568833  176813 system_svc.go:56] duration metric: took 18.062353ms WaitForService to wait for kubelet.
	I1213 00:16:48.568876  176813 kubeadm.go:581] duration metric: took 1m13.134572871s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1213 00:16:48.568901  176813 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:16:48.573103  176813 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:16:48.573128  176813 node_conditions.go:123] node cpu capacity is 2
	I1213 00:16:48.573137  176813 node_conditions.go:105] duration metric: took 4.231035ms to run NodePressure ...
	I1213 00:16:48.573148  176813 start.go:228] waiting for startup goroutines ...
	I1213 00:16:48.573154  176813 start.go:233] waiting for cluster config update ...
	I1213 00:16:48.573163  176813 start.go:242] writing updated cluster config ...
	I1213 00:16:48.573436  176813 ssh_runner.go:195] Run: rm -f paused
	I1213 00:16:48.627109  176813 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1213 00:16:48.628688  176813 out.go:177] 
	W1213 00:16:48.630154  176813 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1213 00:16:48.631498  176813 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1213 00:16:48.633089  176813 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-508612" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-12-13 00:08:49 UTC, ends at Wed 2023-12-13 00:23:41 UTC. --
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.678555581Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427021678541219,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=a96210f8-6ba1-476b-93cd-8ab721bb2cd7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.679069203Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=16deaa1b-38df-47f5-9c57-529903ca362e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.679138741Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=16deaa1b-38df-47f5-9c57-529903ca362e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.679304201Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20d184eec9f33c68b1c58d5ac3d3b42daefb6d790a40ececc0a0a6d73d8bc2e9,PodSandboxId:90f63c23ff82a176fbd5ae9ff4cf9646ae59e08df9e2bfcef103afdf0886a9a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702426477398489232,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400b27cc-1713-4201-8097-3e3fd8004690,},Annotations:map[string]string{io.kubernetes.container.hash: 6f17d3a3,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceffe7d16ebcec4c8d253cf076d69fc6a43dfb849cdfbd6706173b65e02d0841,PodSandboxId:a0adcf9f70dca2dc0269cf7117fda0be3f5486f8c0bb61a4a815df1306e8fa74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702426477443508907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8fb8b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca4e237-6a35-4e8f-9731-3f9655eba995,},Annotations:map[string]string{io.kubernetes.container.hash: c36be869,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3334e05facd9a7a9d9ec1b86f6f09eda9fa692c41aab91e3ca1d18a0c2971500,PodSandboxId:c20d2b9dfbb4d5ddc60f715204f68d40502dad7208d7d42ea8f725167d9d0f88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702426475844747005,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsdtr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 23a261a4-17d1-4657-8052-02b71055c850,},Annotations:map[string]string{io.kubernetes.container.hash: 6fc22b81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc806049c60f6cc717021f3d310359e630e4bf642716e5c060b2058dc95dbea,PodSandboxId:59eda18ed3fd6ec48948543cafcd1ead163bcd4a2496c527f9422a1afe915f0b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702426453649500376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
57ed38c59bb1eeeff2f421f353e43eb4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81c70296c970b0afef07e25c03aa1cffa14aaaafbb70386e1226b02bc905fcfa,PodSandboxId:3228cefd0f55d3b94196b31ba7f566c849fa4a8dddd142a7563fc340ac25d103,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702426453596147554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 093c0e4cf8564d4cddfc63bf4c87f834,},Annotations:map
[string]string{io.kubernetes.container.hash: 73904251,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00fdf95a89e82f9f27a55d5d118ca6097b4297a514064f52e1fff695f64c0dc6,PodSandboxId:672d6c90cedad4e30e5a1cbe87835b947fce4c90edd9d626368f25b0b6dcf1d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702426453267570927,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8521eaa16e6be6fa4a18a323
13c16e0,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e7ea689cef4571673b59e61ef365d3b27d8c9e7a8b4cd6c2d27dc6869a3f35,PodSandboxId:b8c5203ff5e0bb437279f34785e33ca04f300e589eeab01c2f5a1c4ca07bcfef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702426453138159365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6f20bf7e0f43f404e4c0b920dda2219,},A
nnotations:map[string]string{io.kubernetes.container.hash: a4127c24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=16deaa1b-38df-47f5-9c57-529903ca362e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.724852345Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c6dc6f22-9122-4da3-aa67-8a69a3513828 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.724963521Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c6dc6f22-9122-4da3-aa67-8a69a3513828 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.727056288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e6a34053-846a-460f-9774-2442de004e18 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.727419146Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427021727407045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=e6a34053-846a-460f-9774-2442de004e18 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.728059262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6fcd269a-8e20-4f27-844f-de4eba75204d name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.728133563Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6fcd269a-8e20-4f27-844f-de4eba75204d name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.728297491Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20d184eec9f33c68b1c58d5ac3d3b42daefb6d790a40ececc0a0a6d73d8bc2e9,PodSandboxId:90f63c23ff82a176fbd5ae9ff4cf9646ae59e08df9e2bfcef103afdf0886a9a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702426477398489232,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400b27cc-1713-4201-8097-3e3fd8004690,},Annotations:map[string]string{io.kubernetes.container.hash: 6f17d3a3,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceffe7d16ebcec4c8d253cf076d69fc6a43dfb849cdfbd6706173b65e02d0841,PodSandboxId:a0adcf9f70dca2dc0269cf7117fda0be3f5486f8c0bb61a4a815df1306e8fa74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702426477443508907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8fb8b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca4e237-6a35-4e8f-9731-3f9655eba995,},Annotations:map[string]string{io.kubernetes.container.hash: c36be869,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3334e05facd9a7a9d9ec1b86f6f09eda9fa692c41aab91e3ca1d18a0c2971500,PodSandboxId:c20d2b9dfbb4d5ddc60f715204f68d40502dad7208d7d42ea8f725167d9d0f88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702426475844747005,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsdtr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 23a261a4-17d1-4657-8052-02b71055c850,},Annotations:map[string]string{io.kubernetes.container.hash: 6fc22b81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc806049c60f6cc717021f3d310359e630e4bf642716e5c060b2058dc95dbea,PodSandboxId:59eda18ed3fd6ec48948543cafcd1ead163bcd4a2496c527f9422a1afe915f0b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702426453649500376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
57ed38c59bb1eeeff2f421f353e43eb4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81c70296c970b0afef07e25c03aa1cffa14aaaafbb70386e1226b02bc905fcfa,PodSandboxId:3228cefd0f55d3b94196b31ba7f566c849fa4a8dddd142a7563fc340ac25d103,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702426453596147554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 093c0e4cf8564d4cddfc63bf4c87f834,},Annotations:map
[string]string{io.kubernetes.container.hash: 73904251,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00fdf95a89e82f9f27a55d5d118ca6097b4297a514064f52e1fff695f64c0dc6,PodSandboxId:672d6c90cedad4e30e5a1cbe87835b947fce4c90edd9d626368f25b0b6dcf1d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702426453267570927,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8521eaa16e6be6fa4a18a323
13c16e0,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e7ea689cef4571673b59e61ef365d3b27d8c9e7a8b4cd6c2d27dc6869a3f35,PodSandboxId:b8c5203ff5e0bb437279f34785e33ca04f300e589eeab01c2f5a1c4ca07bcfef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702426453138159365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6f20bf7e0f43f404e4c0b920dda2219,},A
nnotations:map[string]string{io.kubernetes.container.hash: a4127c24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6fcd269a-8e20-4f27-844f-de4eba75204d name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.776507795Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a9d596f4-4624-41b0-b52a-5a1e4bc8afa0 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.776602906Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a9d596f4-4624-41b0-b52a-5a1e4bc8afa0 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.778544181Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9cd6aaea-afe9-4b83-9310-e4b152de14c5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.778858089Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427021778845513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=9cd6aaea-afe9-4b83-9310-e4b152de14c5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.779645776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ea7f26c5-9da4-40dd-8fef-51872c0cdc58 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.779691141Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ea7f26c5-9da4-40dd-8fef-51872c0cdc58 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.779850952Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20d184eec9f33c68b1c58d5ac3d3b42daefb6d790a40ececc0a0a6d73d8bc2e9,PodSandboxId:90f63c23ff82a176fbd5ae9ff4cf9646ae59e08df9e2bfcef103afdf0886a9a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702426477398489232,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400b27cc-1713-4201-8097-3e3fd8004690,},Annotations:map[string]string{io.kubernetes.container.hash: 6f17d3a3,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceffe7d16ebcec4c8d253cf076d69fc6a43dfb849cdfbd6706173b65e02d0841,PodSandboxId:a0adcf9f70dca2dc0269cf7117fda0be3f5486f8c0bb61a4a815df1306e8fa74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702426477443508907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8fb8b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca4e237-6a35-4e8f-9731-3f9655eba995,},Annotations:map[string]string{io.kubernetes.container.hash: c36be869,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3334e05facd9a7a9d9ec1b86f6f09eda9fa692c41aab91e3ca1d18a0c2971500,PodSandboxId:c20d2b9dfbb4d5ddc60f715204f68d40502dad7208d7d42ea8f725167d9d0f88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702426475844747005,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsdtr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 23a261a4-17d1-4657-8052-02b71055c850,},Annotations:map[string]string{io.kubernetes.container.hash: 6fc22b81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc806049c60f6cc717021f3d310359e630e4bf642716e5c060b2058dc95dbea,PodSandboxId:59eda18ed3fd6ec48948543cafcd1ead163bcd4a2496c527f9422a1afe915f0b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702426453649500376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
57ed38c59bb1eeeff2f421f353e43eb4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81c70296c970b0afef07e25c03aa1cffa14aaaafbb70386e1226b02bc905fcfa,PodSandboxId:3228cefd0f55d3b94196b31ba7f566c849fa4a8dddd142a7563fc340ac25d103,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702426453596147554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 093c0e4cf8564d4cddfc63bf4c87f834,},Annotations:map
[string]string{io.kubernetes.container.hash: 73904251,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00fdf95a89e82f9f27a55d5d118ca6097b4297a514064f52e1fff695f64c0dc6,PodSandboxId:672d6c90cedad4e30e5a1cbe87835b947fce4c90edd9d626368f25b0b6dcf1d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702426453267570927,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8521eaa16e6be6fa4a18a323
13c16e0,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e7ea689cef4571673b59e61ef365d3b27d8c9e7a8b4cd6c2d27dc6869a3f35,PodSandboxId:b8c5203ff5e0bb437279f34785e33ca04f300e589eeab01c2f5a1c4ca07bcfef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702426453138159365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6f20bf7e0f43f404e4c0b920dda2219,},A
nnotations:map[string]string{io.kubernetes.container.hash: a4127c24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ea7f26c5-9da4-40dd-8fef-51872c0cdc58 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.822980746Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ea43bbd0-96e7-4deb-8cd9-0a9f248de072 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.823129038Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ea43bbd0-96e7-4deb-8cd9-0a9f248de072 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.824759508Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9114a129-f419-40cc-9c68-2287802625a9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.825189909Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427021825172970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=9114a129-f419-40cc-9c68-2287802625a9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.825802501Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=213e8349-3109-46c2-9b0e-55ff42a3a779 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.825848633Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=213e8349-3109-46c2-9b0e-55ff42a3a779 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:23:41 no-preload-143586 crio[734]: time="2023-12-13 00:23:41.826085585Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20d184eec9f33c68b1c58d5ac3d3b42daefb6d790a40ececc0a0a6d73d8bc2e9,PodSandboxId:90f63c23ff82a176fbd5ae9ff4cf9646ae59e08df9e2bfcef103afdf0886a9a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702426477398489232,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400b27cc-1713-4201-8097-3e3fd8004690,},Annotations:map[string]string{io.kubernetes.container.hash: 6f17d3a3,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceffe7d16ebcec4c8d253cf076d69fc6a43dfb849cdfbd6706173b65e02d0841,PodSandboxId:a0adcf9f70dca2dc0269cf7117fda0be3f5486f8c0bb61a4a815df1306e8fa74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702426477443508907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8fb8b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca4e237-6a35-4e8f-9731-3f9655eba995,},Annotations:map[string]string{io.kubernetes.container.hash: c36be869,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3334e05facd9a7a9d9ec1b86f6f09eda9fa692c41aab91e3ca1d18a0c2971500,PodSandboxId:c20d2b9dfbb4d5ddc60f715204f68d40502dad7208d7d42ea8f725167d9d0f88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702426475844747005,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsdtr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 23a261a4-17d1-4657-8052-02b71055c850,},Annotations:map[string]string{io.kubernetes.container.hash: 6fc22b81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc806049c60f6cc717021f3d310359e630e4bf642716e5c060b2058dc95dbea,PodSandboxId:59eda18ed3fd6ec48948543cafcd1ead163bcd4a2496c527f9422a1afe915f0b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702426453649500376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
57ed38c59bb1eeeff2f421f353e43eb4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81c70296c970b0afef07e25c03aa1cffa14aaaafbb70386e1226b02bc905fcfa,PodSandboxId:3228cefd0f55d3b94196b31ba7f566c849fa4a8dddd142a7563fc340ac25d103,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702426453596147554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 093c0e4cf8564d4cddfc63bf4c87f834,},Annotations:map
[string]string{io.kubernetes.container.hash: 73904251,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00fdf95a89e82f9f27a55d5d118ca6097b4297a514064f52e1fff695f64c0dc6,PodSandboxId:672d6c90cedad4e30e5a1cbe87835b947fce4c90edd9d626368f25b0b6dcf1d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702426453267570927,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8521eaa16e6be6fa4a18a323
13c16e0,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e7ea689cef4571673b59e61ef365d3b27d8c9e7a8b4cd6c2d27dc6869a3f35,PodSandboxId:b8c5203ff5e0bb437279f34785e33ca04f300e589eeab01c2f5a1c4ca07bcfef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702426453138159365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6f20bf7e0f43f404e4c0b920dda2219,},A
nnotations:map[string]string{io.kubernetes.container.hash: a4127c24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=213e8349-3109-46c2-9b0e-55ff42a3a779 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ceffe7d16ebce       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   a0adcf9f70dca       coredns-76f75df574-8fb8b
	20d184eec9f33       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   90f63c23ff82a       storage-provisioner
	3334e05facd9a       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   9 minutes ago       Running             kube-proxy                0                   c20d2b9dfbb4d       kube-proxy-xsdtr
	adc806049c60f       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   9 minutes ago       Running             kube-scheduler            2                   59eda18ed3fd6       kube-scheduler-no-preload-143586
	81c70296c970b       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   9 minutes ago       Running             etcd                      2                   3228cefd0f55d       etcd-no-preload-143586
	00fdf95a89e82       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   9 minutes ago       Running             kube-controller-manager   2                   672d6c90cedad       kube-controller-manager-no-preload-143586
	55e7ea689cef4       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   9 minutes ago       Running             kube-apiserver            2                   b8c5203ff5e0b       kube-apiserver-no-preload-143586
	
	* 
	* ==> coredns [ceffe7d16ebcec4c8d253cf076d69fc6a43dfb849cdfbd6706173b65e02d0841] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60715 - 30942 "HINFO IN 339424797621679506.3135540895672571054. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014554846s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-143586
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-143586
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446
	                    minikube.k8s.io/name=no-preload-143586
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_13T00_14_20_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Dec 2023 00:14:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-143586
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 13 Dec 2023 00:23:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 13 Dec 2023 00:19:47 +0000   Wed, 13 Dec 2023 00:14:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 13 Dec 2023 00:19:47 +0000   Wed, 13 Dec 2023 00:14:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 13 Dec 2023 00:19:47 +0000   Wed, 13 Dec 2023 00:14:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 13 Dec 2023 00:19:47 +0000   Wed, 13 Dec 2023 00:14:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.181
	  Hostname:    no-preload-143586
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 bb85d1675f224cc781a112e54bad3e44
	  System UUID:                bb85d167-5f22-4cc7-81a1-12e54bad3e44
	  Boot ID:                    9f621f45-b0f5-4147-b31b-e0050ecf5f7e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-8fb8b                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-no-preload-143586                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-no-preload-143586             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 kube-controller-manager-no-preload-143586    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-xsdtr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-no-preload-143586             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 metrics-server-57f55c9bc5-q7v45              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m7s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m4s   kube-proxy       
	  Normal  Starting                 9m22s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s  kubelet          Node no-preload-143586 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s  kubelet          Node no-preload-143586 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s  kubelet          Node no-preload-143586 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s  kubelet          Node no-preload-143586 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m21s  kubelet          Node no-preload-143586 status is now: NodeReady
	  Normal  RegisteredNode           9m8s   node-controller  Node no-preload-143586 event: Registered Node no-preload-143586 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec13 00:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069424] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.487979] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.521211] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150081] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.459502] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec13 00:09] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.129537] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.148990] systemd-fstab-generator[685]: Ignoring "noauto" for root device
	[  +0.104166] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[  +0.223552] systemd-fstab-generator[720]: Ignoring "noauto" for root device
	[ +29.768468] systemd-fstab-generator[1347]: Ignoring "noauto" for root device
	[ +14.547022] hrtimer: interrupt took 5637904 ns
	[  +5.638925] kauditd_printk_skb: 29 callbacks suppressed
	[Dec13 00:14] systemd-fstab-generator[3994]: Ignoring "noauto" for root device
	[  +9.302037] systemd-fstab-generator[4324]: Ignoring "noauto" for root device
	[ +15.695625] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [81c70296c970b0afef07e25c03aa1cffa14aaaafbb70386e1226b02bc905fcfa] <==
	* {"level":"info","ts":"2023-12-13T00:14:15.58867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86051bbfebcbb1c3 switched to configuration voters=(9657155487074595267)"}
	{"level":"info","ts":"2023-12-13T00:14:15.58878Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8eb1120df9352a4b","local-member-id":"86051bbfebcbb1c3","added-peer-id":"86051bbfebcbb1c3","added-peer-peer-urls":["https://192.168.50.181:2380"]}
	{"level":"info","ts":"2023-12-13T00:14:15.609289Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.181:2380"}
	{"level":"info","ts":"2023-12-13T00:14:15.609456Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.181:2380"}
	{"level":"info","ts":"2023-12-13T00:14:15.60896Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-13T00:14:15.615077Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"86051bbfebcbb1c3","initial-advertise-peer-urls":["https://192.168.50.181:2380"],"listen-peer-urls":["https://192.168.50.181:2380"],"advertise-client-urls":["https://192.168.50.181:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.181:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-13T00:14:15.615311Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-13T00:14:15.946487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86051bbfebcbb1c3 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-13T00:14:15.94655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86051bbfebcbb1c3 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-13T00:14:15.946576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86051bbfebcbb1c3 received MsgPreVoteResp from 86051bbfebcbb1c3 at term 1"}
	{"level":"info","ts":"2023-12-13T00:14:15.946587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86051bbfebcbb1c3 became candidate at term 2"}
	{"level":"info","ts":"2023-12-13T00:14:15.946593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86051bbfebcbb1c3 received MsgVoteResp from 86051bbfebcbb1c3 at term 2"}
	{"level":"info","ts":"2023-12-13T00:14:15.946604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86051bbfebcbb1c3 became leader at term 2"}
	{"level":"info","ts":"2023-12-13T00:14:15.946612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 86051bbfebcbb1c3 elected leader 86051bbfebcbb1c3 at term 2"}
	{"level":"info","ts":"2023-12-13T00:14:15.948126Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"86051bbfebcbb1c3","local-member-attributes":"{Name:no-preload-143586 ClientURLs:[https://192.168.50.181:2379]}","request-path":"/0/members/86051bbfebcbb1c3/attributes","cluster-id":"8eb1120df9352a4b","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-13T00:14:15.948192Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-13T00:14:15.948512Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-13T00:14:15.948707Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-13T00:14:15.950842Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.181:2379"}
	{"level":"info","ts":"2023-12-13T00:14:15.951359Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8eb1120df9352a4b","local-member-id":"86051bbfebcbb1c3","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-13T00:14:15.955197Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-13T00:14:15.955327Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-13T00:14:15.955396Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-13T00:14:15.951433Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-13T00:14:15.955441Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:23:42 up 15 min,  0 users,  load average: 0.09, 0.30, 0.30
	Linux no-preload-143586 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [55e7ea689cef4571673b59e61ef365d3b27d8c9e7a8b4cd6c2d27dc6869a3f35] <==
	* I1213 00:17:36.812708       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 00:19:17.568939       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:19:17.569111       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1213 00:19:18.570117       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:19:18.570247       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1213 00:19:18.570312       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 00:19:18.570115       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:19:18.570458       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:19:18.572416       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 00:20:18.571540       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:20:18.571626       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1213 00:20:18.571636       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 00:20:18.572761       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:20:18.572872       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:20:18.572905       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 00:22:18.572300       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:22:18.572563       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1213 00:22:18.572598       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 00:22:18.573537       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:22:18.573600       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:22:18.573633       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [00fdf95a89e82f9f27a55d5d118ca6097b4297a514064f52e1fff695f64c0dc6] <==
	* I1213 00:18:05.991975       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="106.136µs"
	E1213 00:18:34.187237       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:18:34.636847       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:19:04.194906       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:19:04.647480       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:19:34.201203       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:19:34.657257       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:20:04.207592       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:20:04.668688       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:20:34.215975       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:20:34.677717       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1213 00:20:36.992080       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="222.027µs"
	I1213 00:20:50.992892       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="192.943µs"
	E1213 00:21:04.224439       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:21:04.687617       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:21:34.230561       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:21:34.697360       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:22:04.236128       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:22:04.708805       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:22:34.244053       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:22:34.718282       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:23:04.251209       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:23:04.729109       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:23:34.257283       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:23:34.742757       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [3334e05facd9a7a9d9ec1b86f6f09eda9fa692c41aab91e3ca1d18a0c2971500] <==
	* I1213 00:14:36.334979       1 server_others.go:72] "Using iptables proxy"
	I1213 00:14:36.358938       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.181"]
	I1213 00:14:37.133745       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I1213 00:14:37.133799       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 00:14:37.133814       1 server_others.go:168] "Using iptables Proxier"
	I1213 00:14:37.182252       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1213 00:14:37.182623       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I1213 00:14:37.182710       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 00:14:37.187986       1 config.go:188] "Starting service config controller"
	I1213 00:14:37.189570       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1213 00:14:37.189982       1 config.go:97] "Starting endpoint slice config controller"
	I1213 00:14:37.194743       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1213 00:14:37.190947       1 config.go:315] "Starting node config controller"
	I1213 00:14:37.197376       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1213 00:14:37.197679       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1213 00:14:37.291344       1 shared_informer.go:318] Caches are synced for service config
	I1213 00:14:37.297509       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [adc806049c60f6cc717021f3d310359e630e4bf642716e5c060b2058dc95dbea] <==
	* W1213 00:14:17.578194       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1213 00:14:17.578238       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1213 00:14:18.388401       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1213 00:14:18.388469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1213 00:14:18.445484       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1213 00:14:18.445537       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1213 00:14:18.588535       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1213 00:14:18.588595       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1213 00:14:18.658206       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1213 00:14:18.658297       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1213 00:14:18.763798       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1213 00:14:18.763929       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1213 00:14:18.774178       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1213 00:14:18.774265       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 00:14:18.775325       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1213 00:14:18.775463       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1213 00:14:18.790757       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1213 00:14:18.790831       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1213 00:14:18.806847       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1213 00:14:18.806929       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1213 00:14:18.834704       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1213 00:14:18.834782       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1213 00:14:18.860057       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1213 00:14:18.860110       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1213 00:14:20.846751       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-12-13 00:08:49 UTC, ends at Wed 2023-12-13 00:23:42 UTC. --
	Dec 13 00:20:50 no-preload-143586 kubelet[4331]: E1213 00:20:50.976488    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:21:04 no-preload-143586 kubelet[4331]: E1213 00:21:04.975173    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:21:19 no-preload-143586 kubelet[4331]: E1213 00:21:19.975276    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:21:21 no-preload-143586 kubelet[4331]: E1213 00:21:21.076244    4331 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 13 00:21:21 no-preload-143586 kubelet[4331]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 00:21:21 no-preload-143586 kubelet[4331]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 00:21:21 no-preload-143586 kubelet[4331]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 00:21:30 no-preload-143586 kubelet[4331]: E1213 00:21:30.975302    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:21:44 no-preload-143586 kubelet[4331]: E1213 00:21:44.978513    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:21:59 no-preload-143586 kubelet[4331]: E1213 00:21:59.975427    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:22:13 no-preload-143586 kubelet[4331]: E1213 00:22:13.974579    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:22:21 no-preload-143586 kubelet[4331]: E1213 00:22:21.076320    4331 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 13 00:22:21 no-preload-143586 kubelet[4331]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 00:22:21 no-preload-143586 kubelet[4331]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 00:22:21 no-preload-143586 kubelet[4331]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 00:22:25 no-preload-143586 kubelet[4331]: E1213 00:22:25.974315    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:22:40 no-preload-143586 kubelet[4331]: E1213 00:22:40.975240    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:22:51 no-preload-143586 kubelet[4331]: E1213 00:22:51.975558    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:23:06 no-preload-143586 kubelet[4331]: E1213 00:23:06.978772    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:23:21 no-preload-143586 kubelet[4331]: E1213 00:23:21.077602    4331 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 13 00:23:21 no-preload-143586 kubelet[4331]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 00:23:21 no-preload-143586 kubelet[4331]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 00:23:21 no-preload-143586 kubelet[4331]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 00:23:21 no-preload-143586 kubelet[4331]: E1213 00:23:21.975086    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:23:32 no-preload-143586 kubelet[4331]: E1213 00:23:32.976875    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	
	* 
	* ==> storage-provisioner [20d184eec9f33c68b1c58d5ac3d3b42daefb6d790a40ececc0a0a6d73d8bc2e9] <==
	* I1213 00:14:37.692054       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 00:14:37.715881       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 00:14:37.716142       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 00:14:37.734671       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 00:14:37.736468       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-143586_ae84fa56-b1aa-4927-81cb-7ec9d3faeeb5!
	I1213 00:14:37.737122       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"198e3ab0-405c-4add-9058-2aa3fd8d2473", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-143586_ae84fa56-b1aa-4927-81cb-7ec9d3faeeb5 became leader
	I1213 00:14:37.837619       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-143586_ae84fa56-b1aa-4927-81cb-7ec9d3faeeb5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-143586 -n no-preload-143586
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-143586 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-q7v45
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-143586 describe pod metrics-server-57f55c9bc5-q7v45
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-143586 describe pod metrics-server-57f55c9bc5-q7v45: exit status 1 (69.670544ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-q7v45" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-143586 describe pod metrics-server-57f55c9bc5-q7v45: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.13s)
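The post-mortem helpers above first list pods that are not in the Running phase and then try to describe them. A minimal hand-run sketch of the same checks against this profile (the context name, field selector, and jsonpath are taken verbatim from the helpers_test.go commands in the log; the pod name placeholder is whatever the first command prints):

	kubectl --context no-preload-143586 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'
	kubectl --context no-preload-143586 -n kube-system describe pod <pod-name>

The NotFound from the describe call in the log is consistent with the pod living in kube-system while the helper describes it without a namespace flag, so the check falls back to the default namespace.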

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1213 00:17:45.320933  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
E1213 00:19:27.616540  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1213 00:20:11.804877  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
E1213 00:20:50.664102  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1213 00:22:45.320559  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-508612 -n old-k8s-version-508612
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-13 00:25:49.227561174 +0000 UTC m=+5472.783698714
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
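The wait that times out above polls for dashboard pods by label for 9 minutes. A rough stand-alone equivalent using kubectl directly (namespace, label selector, and timeout are copied from the log line; note that, unlike the test helper's polling loop, kubectl wait exits immediately with "no matching resources found" when no pod with that label exists yet):

	kubectl --context old-k8s-version-508612 -n kubernetes-dashboard \
	  wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m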
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-508612 -n old-k8s-version-508612
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-508612 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-508612 logs -n 25: (1.749480419s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 13 Dec 23 00:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-380248                              | cert-expiration-380248       | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 12 Dec 23 23:58 UTC |
	| start   | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 13 Dec 23 00:01 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p pause-042245                                        | pause-042245                 | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 12 Dec 23 23:58 UTC |
	| start   | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 13 Dec 23 00:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-884273                              | stopped-upgrade-884273       | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-884273                              | stopped-upgrade-884273       | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC | 13 Dec 23 00:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-343019 | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC | 13 Dec 23 00:00 UTC |
	|         | disable-driver-mounts-343019                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC | 13 Dec 23 00:01 UTC |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-508612        | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-335807            | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-143586             | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-743278  | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:02 UTC | 13 Dec 23 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:02 UTC |                     |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-508612             | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:03 UTC | 13 Dec 23 00:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-335807                 | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-143586                  | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-743278       | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/13 00:04:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 00:04:40.034430  177409 out.go:296] Setting OutFile to fd 1 ...
	I1213 00:04:40.034592  177409 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:04:40.034601  177409 out.go:309] Setting ErrFile to fd 2...
	I1213 00:04:40.034606  177409 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:04:40.034805  177409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1213 00:04:40.035357  177409 out.go:303] Setting JSON to false
	I1213 00:04:40.036280  177409 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10028,"bootTime":1702415852,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 00:04:40.036342  177409 start.go:138] virtualization: kvm guest
	I1213 00:04:40.038707  177409 out.go:177] * [default-k8s-diff-port-743278] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 00:04:40.040139  177409 out.go:177]   - MINIKUBE_LOCATION=17777
	I1213 00:04:40.040129  177409 notify.go:220] Checking for updates...
	I1213 00:04:40.041788  177409 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 00:04:40.043246  177409 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:04:40.044627  177409 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1213 00:04:40.046091  177409 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 00:04:40.047562  177409 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 00:04:40.049427  177409 config.go:182] Loaded profile config "default-k8s-diff-port-743278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:04:40.049930  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:04:40.049979  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:04:40.064447  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44247
	I1213 00:04:40.064825  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:04:40.065333  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:04:40.065352  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:04:40.065686  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:04:40.065850  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:04:40.066092  177409 driver.go:392] Setting default libvirt URI to qemu:///system
	I1213 00:04:40.066357  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:04:40.066389  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:04:40.080217  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38499
	I1213 00:04:40.080643  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:04:40.081072  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:04:40.081098  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:04:40.081436  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:04:40.081622  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:04:40.114108  177409 out.go:177] * Using the kvm2 driver based on existing profile
	I1213 00:04:40.115585  177409 start.go:298] selected driver: kvm2
	I1213 00:04:40.115603  177409 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-743278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-743278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:04:40.115714  177409 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 00:04:40.116379  177409 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:04:40.116485  177409 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17777-136241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1213 00:04:40.131964  177409 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1213 00:04:40.132324  177409 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 00:04:40.132392  177409 cni.go:84] Creating CNI manager for ""
	I1213 00:04:40.132405  177409 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:04:40.132416  177409 start_flags.go:323] config:
	{Name:default-k8s-diff-port-743278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-743278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:04:40.132599  177409 iso.go:125] acquiring lock: {Name:mkaf0c5717f5bf6253bd7ebf86eb20a82d195bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:04:40.135330  177409 out.go:177] * Starting control plane node default-k8s-diff-port-743278 in cluster default-k8s-diff-port-743278
	I1213 00:04:36.772718  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:39.844694  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:40.136912  177409 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1213 00:04:40.136959  177409 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1213 00:04:40.136972  177409 cache.go:56] Caching tarball of preloaded images
	I1213 00:04:40.137094  177409 preload.go:174] Found /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 00:04:40.137108  177409 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1213 00:04:40.137215  177409 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/config.json ...
	I1213 00:04:40.137413  177409 start.go:365] acquiring machines lock for default-k8s-diff-port-743278: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 00:04:45.924700  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:48.996768  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:55.076732  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:58.148779  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:04.228721  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:07.300700  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:13.380743  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:16.452690  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:22.532695  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:25.604771  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:31.684681  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:34.756720  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:40.836697  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:43.908711  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:49.988729  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:53.060691  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:59.140737  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:02.212709  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:08.292717  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:11.364746  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:17.444722  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:20.516796  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:26.596650  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:29.668701  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:35.748723  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:38.820688  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:44.900719  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:47.972683  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:54.052708  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:57.124684  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:03.204728  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:06.276720  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:12.356681  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:15.428743  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:21.508696  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:24.580749  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:30.660747  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:33.732746  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:39.812738  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:42.884767  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:48.964744  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:52.036691  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:58.116726  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:08:01.188638  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:08:07.268756  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:08:10.340725  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:08:13.345031  177122 start.go:369] acquired machines lock for "embed-certs-335807" in 4m2.39512191s
	I1213 00:08:13.345120  177122 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:08:13.345129  177122 fix.go:54] fixHost starting: 
	I1213 00:08:13.345524  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:08:13.345564  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:08:13.360333  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44799
	I1213 00:08:13.360832  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:08:13.361366  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:08:13.361390  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:08:13.361769  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:08:13.361941  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:13.362104  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:08:13.363919  177122 fix.go:102] recreateIfNeeded on embed-certs-335807: state=Stopped err=<nil>
	I1213 00:08:13.363938  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	W1213 00:08:13.364125  177122 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:08:13.366077  177122 out.go:177] * Restarting existing kvm2 VM for "embed-certs-335807" ...
	I1213 00:08:13.342763  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:08:13.342804  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:08:13.344878  176813 machine.go:91] provisioned docker machine in 4m37.409041046s
	I1213 00:08:13.344942  176813 fix.go:56] fixHost completed within 4m37.430106775s
	I1213 00:08:13.344949  176813 start.go:83] releasing machines lock for "old-k8s-version-508612", held for 4m37.430132032s
	W1213 00:08:13.344965  176813 start.go:694] error starting host: provision: host is not running
	W1213 00:08:13.345107  176813 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1213 00:08:13.345120  176813 start.go:709] Will try again in 5 seconds ...
	I1213 00:08:13.367310  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Start
	I1213 00:08:13.367451  177122 main.go:141] libmachine: (embed-certs-335807) Ensuring networks are active...
	I1213 00:08:13.368551  177122 main.go:141] libmachine: (embed-certs-335807) Ensuring network default is active
	I1213 00:08:13.368936  177122 main.go:141] libmachine: (embed-certs-335807) Ensuring network mk-embed-certs-335807 is active
	I1213 00:08:13.369290  177122 main.go:141] libmachine: (embed-certs-335807) Getting domain xml...
	I1213 00:08:13.369993  177122 main.go:141] libmachine: (embed-certs-335807) Creating domain...
	I1213 00:08:14.617766  177122 main.go:141] libmachine: (embed-certs-335807) Waiting to get IP...
	I1213 00:08:14.618837  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:14.619186  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:14.619322  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:14.619202  177987 retry.go:31] will retry after 226.757968ms: waiting for machine to come up
	I1213 00:08:14.847619  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:14.847962  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:14.847996  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:14.847892  177987 retry.go:31] will retry after 390.063287ms: waiting for machine to come up
	I1213 00:08:15.239515  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:15.239906  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:15.239939  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:15.239845  177987 retry.go:31] will retry after 341.644988ms: waiting for machine to come up
	I1213 00:08:15.583408  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:15.583848  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:15.583878  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:15.583796  177987 retry.go:31] will retry after 420.722896ms: waiting for machine to come up
	I1213 00:08:18.346616  176813 start.go:365] acquiring machines lock for old-k8s-version-508612: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 00:08:16.006364  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:16.006767  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:16.006803  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:16.006713  177987 retry.go:31] will retry after 548.041925ms: waiting for machine to come up
	I1213 00:08:16.556444  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:16.556880  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:16.556912  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:16.556833  177987 retry.go:31] will retry after 862.959808ms: waiting for machine to come up
	I1213 00:08:17.421147  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:17.421596  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:17.421630  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:17.421544  177987 retry.go:31] will retry after 1.085782098s: waiting for machine to come up
	I1213 00:08:18.509145  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:18.509595  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:18.509619  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:18.509556  177987 retry.go:31] will retry after 1.303432656s: waiting for machine to come up
	I1213 00:08:19.814985  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:19.815430  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:19.815473  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:19.815367  177987 retry.go:31] will retry after 1.337474429s: waiting for machine to come up
	I1213 00:08:21.154792  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:21.155213  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:21.155236  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:21.155165  177987 retry.go:31] will retry after 2.104406206s: waiting for machine to come up
	I1213 00:08:23.262615  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:23.263144  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:23.263174  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:23.263066  177987 retry.go:31] will retry after 2.064696044s: waiting for machine to come up
	I1213 00:08:25.330105  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:25.330586  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:25.330621  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:25.330544  177987 retry.go:31] will retry after 2.270537288s: waiting for machine to come up
	I1213 00:08:27.602267  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:27.602787  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:27.602810  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:27.602758  177987 retry.go:31] will retry after 3.020844169s: waiting for machine to come up
	I1213 00:08:30.626232  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:30.626696  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:30.626731  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:30.626645  177987 retry.go:31] will retry after 5.329279261s: waiting for machine to come up
	I1213 00:08:37.405257  177307 start.go:369] acquired machines lock for "no-preload-143586" in 4m8.02482326s
	I1213 00:08:37.405329  177307 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:08:37.405340  177307 fix.go:54] fixHost starting: 
	I1213 00:08:37.405777  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:08:37.405830  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:08:37.422055  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41433
	I1213 00:08:37.422558  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:08:37.423112  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:08:37.423143  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:08:37.423462  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:08:37.423650  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:08:37.423795  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:08:37.425302  177307 fix.go:102] recreateIfNeeded on no-preload-143586: state=Stopped err=<nil>
	I1213 00:08:37.425345  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	W1213 00:08:37.425519  177307 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:08:37.428723  177307 out.go:177] * Restarting existing kvm2 VM for "no-preload-143586" ...
	I1213 00:08:35.958579  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:35.959166  177122 main.go:141] libmachine: (embed-certs-335807) Found IP for machine: 192.168.61.249
	I1213 00:08:35.959200  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has current primary IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:35.959212  177122 main.go:141] libmachine: (embed-certs-335807) Reserving static IP address...
	I1213 00:08:35.959676  177122 main.go:141] libmachine: (embed-certs-335807) Reserved static IP address: 192.168.61.249
	I1213 00:08:35.959731  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "embed-certs-335807", mac: "52:54:00:20:1b:c0", ip: "192.168.61.249"} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:35.959746  177122 main.go:141] libmachine: (embed-certs-335807) Waiting for SSH to be available...
	I1213 00:08:35.959779  177122 main.go:141] libmachine: (embed-certs-335807) DBG | skip adding static IP to network mk-embed-certs-335807 - found existing host DHCP lease matching {name: "embed-certs-335807", mac: "52:54:00:20:1b:c0", ip: "192.168.61.249"}
	I1213 00:08:35.959795  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Getting to WaitForSSH function...
	I1213 00:08:35.962033  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:35.962419  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:35.962448  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:35.962552  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Using SSH client type: external
	I1213 00:08:35.962575  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa (-rw-------)
	I1213 00:08:35.962608  177122 main.go:141] libmachine: (embed-certs-335807) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:08:35.962624  177122 main.go:141] libmachine: (embed-certs-335807) DBG | About to run SSH command:
	I1213 00:08:35.962637  177122 main.go:141] libmachine: (embed-certs-335807) DBG | exit 0
	I1213 00:08:36.056268  177122 main.go:141] libmachine: (embed-certs-335807) DBG | SSH cmd err, output: <nil>: 
	I1213 00:08:36.056649  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetConfigRaw
	I1213 00:08:36.057283  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetIP
	I1213 00:08:36.060244  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.060656  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.060705  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.060930  177122 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/config.json ...
	I1213 00:08:36.061132  177122 machine.go:88] provisioning docker machine ...
	I1213 00:08:36.061150  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:36.061386  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetMachineName
	I1213 00:08:36.061569  177122 buildroot.go:166] provisioning hostname "embed-certs-335807"
	I1213 00:08:36.061593  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetMachineName
	I1213 00:08:36.061737  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.063997  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.064352  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.064374  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.064532  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:36.064743  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.064899  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.065039  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:36.065186  177122 main.go:141] libmachine: Using SSH client type: native
	I1213 00:08:36.065556  177122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.249 22 <nil> <nil>}
	I1213 00:08:36.065575  177122 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-335807 && echo "embed-certs-335807" | sudo tee /etc/hostname
	I1213 00:08:36.199697  177122 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-335807
	
	I1213 00:08:36.199733  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.202879  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.203289  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.203312  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.203495  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:36.203705  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.203845  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.203968  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:36.204141  177122 main.go:141] libmachine: Using SSH client type: native
	I1213 00:08:36.204545  177122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.249 22 <nil> <nil>}
	I1213 00:08:36.204564  177122 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-335807' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-335807/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-335807' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:08:36.336285  177122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:08:36.336315  177122 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:08:36.336337  177122 buildroot.go:174] setting up certificates
	I1213 00:08:36.336350  177122 provision.go:83] configureAuth start
	I1213 00:08:36.336364  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetMachineName
	I1213 00:08:36.336658  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetIP
	I1213 00:08:36.339327  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.339695  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.339727  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.339861  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.342106  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.342485  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.342506  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.342613  177122 provision.go:138] copyHostCerts
	I1213 00:08:36.342699  177122 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:08:36.342711  177122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:08:36.342795  177122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:08:36.342910  177122 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:08:36.342928  177122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:08:36.342962  177122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:08:36.343051  177122 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:08:36.343061  177122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:08:36.343099  177122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:08:36.343185  177122 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.embed-certs-335807 san=[192.168.61.249 192.168.61.249 localhost 127.0.0.1 minikube embed-certs-335807]
	I1213 00:08:36.680595  177122 provision.go:172] copyRemoteCerts
	I1213 00:08:36.680687  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:08:36.680715  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.683411  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.683664  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.683690  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.683826  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:36.684044  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.684217  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:36.684370  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:08:36.773978  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:08:36.795530  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1213 00:08:36.817104  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 00:08:36.838510  177122 provision.go:86] duration metric: configureAuth took 502.141764ms
	I1213 00:08:36.838544  177122 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:08:36.838741  177122 config.go:182] Loaded profile config "embed-certs-335807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:08:36.838818  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.841372  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.841720  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.841759  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.841875  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:36.842095  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.842276  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.842447  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:36.842593  177122 main.go:141] libmachine: Using SSH client type: native
	I1213 00:08:36.843043  177122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.249 22 <nil> <nil>}
	I1213 00:08:36.843069  177122 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:08:37.150317  177122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:08:37.150364  177122 machine.go:91] provisioned docker machine in 1.089215763s
	I1213 00:08:37.150378  177122 start.go:300] post-start starting for "embed-certs-335807" (driver="kvm2")
	I1213 00:08:37.150391  177122 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:08:37.150424  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.150800  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:08:37.150829  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:37.153552  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.153920  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.153958  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.154075  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:37.154268  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.154406  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:37.154562  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:08:37.245839  177122 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:08:37.249929  177122 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:08:37.249959  177122 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:08:37.250029  177122 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:08:37.250114  177122 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:08:37.250202  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:08:37.258062  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:08:37.280034  177122 start.go:303] post-start completed in 129.642247ms
	I1213 00:08:37.280060  177122 fix.go:56] fixHost completed within 23.934930358s
	I1213 00:08:37.280085  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:37.282572  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.282861  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.282903  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.283059  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:37.283333  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.283516  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.283694  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:37.283898  177122 main.go:141] libmachine: Using SSH client type: native
	I1213 00:08:37.284217  177122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.249 22 <nil> <nil>}
	I1213 00:08:37.284229  177122 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1213 00:08:37.405050  177122 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702426117.378231894
	
	I1213 00:08:37.405077  177122 fix.go:206] guest clock: 1702426117.378231894
	I1213 00:08:37.405099  177122 fix.go:219] Guest: 2023-12-13 00:08:37.378231894 +0000 UTC Remote: 2023-12-13 00:08:37.280064166 +0000 UTC m=+266.483341520 (delta=98.167728ms)
	I1213 00:08:37.405127  177122 fix.go:190] guest clock delta is within tolerance: 98.167728ms
	I1213 00:08:37.405137  177122 start.go:83] releasing machines lock for "embed-certs-335807", held for 24.060057368s
	I1213 00:08:37.405161  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.405417  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetIP
	I1213 00:08:37.408128  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.408513  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.408559  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.408681  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.409264  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.409449  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.409542  177122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:08:37.409611  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:37.409647  177122 ssh_runner.go:195] Run: cat /version.json
	I1213 00:08:37.409673  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:37.412390  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.412733  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.412764  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.412795  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.412910  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:37.413104  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.413187  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.413224  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.413263  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:37.413462  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:37.413455  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:08:37.413633  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.413758  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:37.413899  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:08:37.531948  177122 ssh_runner.go:195] Run: systemctl --version
	I1213 00:08:37.537555  177122 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:08:37.677429  177122 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:08:37.684043  177122 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:08:37.684115  177122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:08:37.702304  177122 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:08:37.702327  177122 start.go:475] detecting cgroup driver to use...
	I1213 00:08:37.702388  177122 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:08:37.716601  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:08:37.728516  177122 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:08:37.728571  177122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:08:37.740595  177122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:08:37.753166  177122 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:08:37.853095  177122 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:08:37.970696  177122 docker.go:219] disabling docker service ...
	I1213 00:08:37.970769  177122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:08:37.983625  177122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:08:37.994924  177122 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:08:38.110057  177122 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:08:38.229587  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:08:38.243052  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:08:38.260480  177122 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1213 00:08:38.260547  177122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:08:38.269442  177122 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:08:38.269508  177122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:08:38.278569  177122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:08:38.287680  177122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:08:38.296798  177122 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:08:38.306247  177122 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:08:38.314189  177122 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:08:38.314251  177122 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:08:38.326702  177122 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:08:38.335111  177122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:08:38.435024  177122 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 00:08:38.600232  177122 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:08:38.600322  177122 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:08:38.606384  177122 start.go:543] Will wait 60s for crictl version
	I1213 00:08:38.606446  177122 ssh_runner.go:195] Run: which crictl
	I1213 00:08:38.611180  177122 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:08:38.654091  177122 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1213 00:08:38.654197  177122 ssh_runner.go:195] Run: crio --version
	I1213 00:08:38.705615  177122 ssh_runner.go:195] Run: crio --version
	I1213 00:08:38.755387  177122 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
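The two waits logged above (start.go:522 for /var/run/crio/crio.sock and start.go:543 for crictl) simply poll until the restarted runtime answers. A rough local sketch of that kind of 60s socket wait, not minikube's actual ssh_runner-based implementation:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the given path exists or the timeout elapses,
	// roughly mirroring the "Will wait 60s for socket path" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket is present
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}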
	I1213 00:08:37.430037  177307 main.go:141] libmachine: (no-preload-143586) Calling .Start
	I1213 00:08:37.430266  177307 main.go:141] libmachine: (no-preload-143586) Ensuring networks are active...
	I1213 00:08:37.430931  177307 main.go:141] libmachine: (no-preload-143586) Ensuring network default is active
	I1213 00:08:37.431290  177307 main.go:141] libmachine: (no-preload-143586) Ensuring network mk-no-preload-143586 is active
	I1213 00:08:37.431640  177307 main.go:141] libmachine: (no-preload-143586) Getting domain xml...
	I1213 00:08:37.432281  177307 main.go:141] libmachine: (no-preload-143586) Creating domain...
	I1213 00:08:38.686491  177307 main.go:141] libmachine: (no-preload-143586) Waiting to get IP...
	I1213 00:08:38.687472  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:38.688010  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:38.688095  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:38.687986  178111 retry.go:31] will retry after 246.453996ms: waiting for machine to come up
	I1213 00:08:38.936453  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:38.936931  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:38.936963  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:38.936879  178111 retry.go:31] will retry after 317.431088ms: waiting for machine to come up
	I1213 00:08:39.256641  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:39.257217  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:39.257241  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:39.257165  178111 retry.go:31] will retry after 379.635912ms: waiting for machine to come up
	I1213 00:08:38.757019  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetIP
	I1213 00:08:38.760125  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:38.760684  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:38.760720  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:38.760949  177122 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1213 00:08:38.765450  177122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:08:38.778459  177122 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1213 00:08:38.778539  177122 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:08:38.819215  177122 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1213 00:08:38.819281  177122 ssh_runner.go:195] Run: which lz4
	I1213 00:08:38.823481  177122 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 00:08:38.829034  177122 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 00:08:38.829069  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1213 00:08:40.721922  177122 crio.go:444] Took 1.898469 seconds to copy over tarball
	I1213 00:08:40.721984  177122 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 00:08:39.638611  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:39.639108  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:39.639137  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:39.639067  178111 retry.go:31] will retry after 596.16391ms: waiting for machine to come up
	I1213 00:08:40.237504  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:40.237957  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:40.237990  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:40.237911  178111 retry.go:31] will retry after 761.995315ms: waiting for machine to come up
	I1213 00:08:41.002003  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:41.002388  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:41.002413  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:41.002329  178111 retry.go:31] will retry after 693.578882ms: waiting for machine to come up
	I1213 00:08:41.697126  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:41.697617  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:41.697652  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:41.697555  178111 retry.go:31] will retry after 1.050437275s: waiting for machine to come up
	I1213 00:08:42.749227  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:42.749833  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:42.749866  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:42.749782  178111 retry.go:31] will retry after 1.175916736s: waiting for machine to come up
	I1213 00:08:43.927564  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:43.928115  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:43.928144  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:43.928065  178111 retry.go:31] will retry after 1.590924957s: waiting for machine to come up
	I1213 00:08:43.767138  177122 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.045121634s)
	I1213 00:08:43.767169  177122 crio.go:451] Took 3.045224 seconds to extract the tarball
	I1213 00:08:43.767178  177122 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 00:08:43.809047  177122 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:08:43.873704  177122 crio.go:496] all images are preloaded for cri-o runtime.
	I1213 00:08:43.873726  177122 cache_images.go:84] Images are preloaded, skipping loading
	I1213 00:08:43.873792  177122 ssh_runner.go:195] Run: crio config
	I1213 00:08:43.941716  177122 cni.go:84] Creating CNI manager for ""
	I1213 00:08:43.941747  177122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:08:43.941774  177122 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1213 00:08:43.941800  177122 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.249 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-335807 NodeName:embed-certs-335807 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 00:08:43.942026  177122 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-335807"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 00:08:43.942123  177122 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-335807 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-335807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1213 00:08:43.942201  177122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1213 00:08:43.951461  177122 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:08:43.951550  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:08:43.960491  177122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1213 00:08:43.976763  177122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 00:08:43.993725  177122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1213 00:08:44.010795  177122 ssh_runner.go:195] Run: grep 192.168.61.249	control-plane.minikube.internal$ /etc/hosts
	I1213 00:08:44.014668  177122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.249	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:08:44.027339  177122 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807 for IP: 192.168.61.249
	I1213 00:08:44.027376  177122 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:08:44.027550  177122 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:08:44.027617  177122 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:08:44.027701  177122 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/client.key
	I1213 00:08:44.027786  177122 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/apiserver.key.ba34ddd8
	I1213 00:08:44.027844  177122 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/proxy-client.key
	I1213 00:08:44.027987  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:08:44.028035  177122 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:08:44.028056  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:08:44.028088  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:08:44.028129  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:08:44.028158  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:08:44.028220  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:08:44.029033  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:08:44.054023  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 00:08:44.078293  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:08:44.102083  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 00:08:44.126293  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:08:44.149409  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:08:44.172887  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:08:44.195662  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:08:44.218979  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:08:44.241598  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:08:44.265251  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:08:44.290073  177122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:08:44.306685  177122 ssh_runner.go:195] Run: openssl version
	I1213 00:08:44.312422  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:08:44.322405  177122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:08:44.327215  177122 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:08:44.327296  177122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:08:44.333427  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1213 00:08:44.343574  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:08:44.353981  177122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:08:44.358997  177122 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:08:44.359051  177122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:08:44.364654  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 00:08:44.375147  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:08:44.384900  177122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:08:44.389492  177122 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:08:44.389553  177122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:08:44.395105  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 00:08:44.404656  177122 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:08:44.409852  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 00:08:44.415755  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 00:08:44.421911  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 00:08:44.428119  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 00:08:44.435646  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 00:08:44.441692  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
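The openssl x509 -checkend 86400 runs above verify that each control-plane certificate stays valid for at least the next 24 hours. The same check can be expressed with Go's crypto/x509; this is only an illustrative equivalent, not what certs.go does internally:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// the same condition `openssl x509 -checkend` tests.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}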
	I1213 00:08:44.447849  177122 kubeadm.go:404] StartCluster: {Name:embed-certs-335807 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-335807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.249 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:08:44.447976  177122 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:08:44.448025  177122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:08:44.495646  177122 cri.go:89] found id: ""
	I1213 00:08:44.495744  177122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:08:44.506405  177122 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1213 00:08:44.506454  177122 kubeadm.go:636] restartCluster start
	I1213 00:08:44.506515  177122 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 00:08:44.516110  177122 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:44.517275  177122 kubeconfig.go:92] found "embed-certs-335807" server: "https://192.168.61.249:8443"
	I1213 00:08:44.519840  177122 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 00:08:44.529214  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:44.529294  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:44.540415  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:44.540447  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:44.540497  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:44.552090  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:45.052810  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:45.052890  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:45.066300  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:45.552897  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:45.553031  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:45.564969  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:45.520191  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:45.520729  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:45.520754  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:45.520662  178111 retry.go:31] will retry after 1.407916355s: waiting for machine to come up
	I1213 00:08:46.930655  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:46.931073  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:46.931138  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:46.930993  178111 retry.go:31] will retry after 2.033169427s: waiting for machine to come up
	I1213 00:08:48.966888  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:48.967318  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:48.967351  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:48.967253  178111 retry.go:31] will retry after 2.277791781s: waiting for machine to come up
	I1213 00:08:46.052915  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:46.053025  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:46.068633  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:46.552208  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:46.552317  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:46.565045  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:47.052533  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:47.052627  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:47.068457  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:47.553040  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:47.553127  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:47.564657  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:48.052228  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:48.052322  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:48.068950  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:48.553171  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:48.553256  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:48.568868  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:49.052389  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:49.052515  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:49.064674  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:49.552894  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:49.553012  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:49.564302  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:50.052843  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:50.052941  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:50.064617  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:50.553231  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:50.553316  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:50.567944  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:51.247665  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:51.248141  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:51.248175  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:51.248098  178111 retry.go:31] will retry after 4.234068925s: waiting for machine to come up
	I1213 00:08:51.052574  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:51.052700  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:51.069491  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:51.553152  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:51.553234  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:51.565331  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:52.052984  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:52.053064  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:52.064748  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:52.552257  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:52.552362  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:52.563626  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:53.053196  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:53.053287  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:53.064273  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:53.552319  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:53.552423  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:53.563587  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:54.053227  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:54.053331  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:54.065636  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:54.530249  177122 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1213 00:08:54.530301  177122 kubeadm.go:1135] stopping kube-system containers ...
	I1213 00:08:54.530330  177122 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 00:08:54.530424  177122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:08:54.570200  177122 cri.go:89] found id: ""
	I1213 00:08:54.570275  177122 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 00:08:54.586722  177122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:08:54.596240  177122 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:08:54.596313  177122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:08:54.605202  177122 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1213 00:08:54.605226  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:54.718619  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:55.483563  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:55.483985  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:55.484024  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:55.483927  178111 retry.go:31] will retry after 5.446962632s: waiting for machine to come up
	I1213 00:08:55.944250  177122 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.225592219s)
	I1213 00:08:55.944282  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:56.132294  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:56.214859  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:56.297313  177122 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:08:56.297421  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:56.315946  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:56.830228  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:57.329695  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:57.830336  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:58.329610  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:58.829933  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:58.853978  177122 api_server.go:72] duration metric: took 2.556667404s to wait for apiserver process to appear ...
	I1213 00:08:58.854013  177122 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:08:58.854054  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
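From here the apiserver healthz endpoint at https://192.168.61.249:8443/healthz is polled until it reports ok. A minimal sketch of such a probe, skipping certificate verification purely for brevity, which is not necessarily how api_server.go does it:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// HTTP 200 with body "ok" or the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		fmt.Println(waitForHealthz("https://192.168.61.249:8443/healthz", time.Minute))
	}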
	I1213 00:09:02.161624  177409 start.go:369] acquired machines lock for "default-k8s-diff-port-743278" in 4m22.024178516s
	I1213 00:09:02.161693  177409 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:09:02.161704  177409 fix.go:54] fixHost starting: 
	I1213 00:09:02.162127  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:02.162174  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:02.179045  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41749
	I1213 00:09:02.179554  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:02.180099  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:02.180131  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:02.180461  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:02.180658  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:02.180795  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:02.182459  177409 fix.go:102] recreateIfNeeded on default-k8s-diff-port-743278: state=Stopped err=<nil>
	I1213 00:09:02.182498  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	W1213 00:09:02.182657  177409 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:09:02.184934  177409 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-743278" ...
	I1213 00:09:00.933522  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:00.934020  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has current primary IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:00.934046  177307 main.go:141] libmachine: (no-preload-143586) Found IP for machine: 192.168.50.181
	I1213 00:09:00.934058  177307 main.go:141] libmachine: (no-preload-143586) Reserving static IP address...
	I1213 00:09:00.934538  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "no-preload-143586", mac: "52:54:00:4d:da:7b", ip: "192.168.50.181"} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:00.934573  177307 main.go:141] libmachine: (no-preload-143586) DBG | skip adding static IP to network mk-no-preload-143586 - found existing host DHCP lease matching {name: "no-preload-143586", mac: "52:54:00:4d:da:7b", ip: "192.168.50.181"}
	I1213 00:09:00.934592  177307 main.go:141] libmachine: (no-preload-143586) Reserved static IP address: 192.168.50.181
	I1213 00:09:00.934601  177307 main.go:141] libmachine: (no-preload-143586) Waiting for SSH to be available...
	I1213 00:09:00.934610  177307 main.go:141] libmachine: (no-preload-143586) DBG | Getting to WaitForSSH function...
	I1213 00:09:00.936830  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:00.937236  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:00.937283  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:00.937399  177307 main.go:141] libmachine: (no-preload-143586) DBG | Using SSH client type: external
	I1213 00:09:00.937421  177307 main.go:141] libmachine: (no-preload-143586) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa (-rw-------)
	I1213 00:09:00.937458  177307 main.go:141] libmachine: (no-preload-143586) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.181 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:09:00.937473  177307 main.go:141] libmachine: (no-preload-143586) DBG | About to run SSH command:
	I1213 00:09:00.937485  177307 main.go:141] libmachine: (no-preload-143586) DBG | exit 0
	I1213 00:09:01.024658  177307 main.go:141] libmachine: (no-preload-143586) DBG | SSH cmd err, output: <nil>: 
	I1213 00:09:01.024996  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetConfigRaw
	I1213 00:09:01.025611  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetIP
	I1213 00:09:01.028062  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.028471  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.028509  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.028734  177307 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/config.json ...
	I1213 00:09:01.028955  177307 machine.go:88] provisioning docker machine ...
	I1213 00:09:01.028980  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:01.029193  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetMachineName
	I1213 00:09:01.029394  177307 buildroot.go:166] provisioning hostname "no-preload-143586"
	I1213 00:09:01.029409  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetMachineName
	I1213 00:09:01.029580  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.031949  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.032273  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.032305  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.032413  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.032599  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.032758  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.032877  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.033036  177307 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:01.033377  177307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1213 00:09:01.033395  177307 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-143586 && echo "no-preload-143586" | sudo tee /etc/hostname
	I1213 00:09:01.157420  177307 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-143586
	
	I1213 00:09:01.157461  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.160181  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.160498  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.160535  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.160654  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.160915  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.161104  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.161299  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.161469  177307 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:01.161785  177307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1213 00:09:01.161811  177307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-143586' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-143586/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-143586' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:09:01.287746  177307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:09:01.287776  177307 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:09:01.287835  177307 buildroot.go:174] setting up certificates
	I1213 00:09:01.287844  177307 provision.go:83] configureAuth start
	I1213 00:09:01.287857  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetMachineName
	I1213 00:09:01.288156  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetIP
	I1213 00:09:01.290754  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.291147  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.291179  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.291296  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.293643  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.294002  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.294034  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.294184  177307 provision.go:138] copyHostCerts
	I1213 00:09:01.294243  177307 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:09:01.294256  177307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:09:01.294323  177307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:09:01.294441  177307 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:09:01.294453  177307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:09:01.294489  177307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:09:01.294569  177307 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:09:01.294578  177307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:09:01.294610  177307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:09:01.294683  177307 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.no-preload-143586 san=[192.168.50.181 192.168.50.181 localhost 127.0.0.1 minikube no-preload-143586]
	I1213 00:09:01.407742  177307 provision.go:172] copyRemoteCerts
	I1213 00:09:01.407823  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:09:01.407856  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.410836  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.411141  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.411170  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.411455  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.411698  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.411883  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.412056  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:09:01.501782  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 00:09:01.530009  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:09:01.555147  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1213 00:09:01.580479  177307 provision.go:86] duration metric: configureAuth took 292.598329ms
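(For reference, and not output captured from this run: the SAN set logged by the server-cert generation step above can be checked against the certificate that was just copied to the guest, assuming openssl is present on the image:

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'

The names and IPs printed should match the san=[...] list in the provision line above.)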
	I1213 00:09:01.580511  177307 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:09:01.580732  177307 config.go:182] Loaded profile config "no-preload-143586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1213 00:09:01.580835  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.583742  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.584241  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.584274  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.584581  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.584798  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.585004  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.585184  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.585429  177307 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:01.585889  177307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1213 00:09:01.585928  177307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:09:01.909801  177307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:09:01.909855  177307 machine.go:91] provisioned docker machine in 880.876025ms
	I1213 00:09:01.909868  177307 start.go:300] post-start starting for "no-preload-143586" (driver="kvm2")
	I1213 00:09:01.909883  177307 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:09:01.909905  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:01.910311  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:09:01.910349  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.913247  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.913635  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.913669  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.913824  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.914044  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.914199  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.914349  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:09:02.005986  177307 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:09:02.011294  177307 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:09:02.011323  177307 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:09:02.011403  177307 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:09:02.011494  177307 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:09:02.011601  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:09:02.022942  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:02.044535  177307 start.go:303] post-start completed in 134.650228ms
	I1213 00:09:02.044569  177307 fix.go:56] fixHost completed within 24.639227496s
	I1213 00:09:02.044597  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:02.047115  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.047543  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:02.047573  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.047758  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:02.047986  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:02.048161  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:02.048340  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:02.048500  177307 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:02.048803  177307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1213 00:09:02.048816  177307 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1213 00:09:02.161458  177307 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702426142.108795362
	
	I1213 00:09:02.161485  177307 fix.go:206] guest clock: 1702426142.108795362
	I1213 00:09:02.161496  177307 fix.go:219] Guest: 2023-12-13 00:09:02.108795362 +0000 UTC Remote: 2023-12-13 00:09:02.044573107 +0000 UTC m=+272.815740988 (delta=64.222255ms)
	I1213 00:09:02.161522  177307 fix.go:190] guest clock delta is within tolerance: 64.222255ms
	I1213 00:09:02.161529  177307 start.go:83] releasing machines lock for "no-preload-143586", held for 24.756225075s
	I1213 00:09:02.161560  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:02.161847  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetIP
	I1213 00:09:02.164980  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.165383  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:02.165406  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.165582  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:02.166273  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:02.166493  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:02.166576  177307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:09:02.166621  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:02.166903  177307 ssh_runner.go:195] Run: cat /version.json
	I1213 00:09:02.166931  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:02.169526  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.169553  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.169895  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:02.169938  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:02.169978  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.170000  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.170183  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:02.170282  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:02.170344  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:02.170473  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:02.170480  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:02.170603  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:09:02.170653  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:02.170804  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:09:02.281372  177307 ssh_runner.go:195] Run: systemctl --version
	I1213 00:09:02.288798  177307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:09:02.432746  177307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:09:02.441453  177307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:09:02.441539  177307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:09:02.456484  177307 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:09:02.456512  177307 start.go:475] detecting cgroup driver to use...
	I1213 00:09:02.456578  177307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:09:02.473267  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:09:02.485137  177307 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:09:02.485226  177307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:09:02.497728  177307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:09:02.510592  177307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:09:02.657681  177307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:09:02.791382  177307 docker.go:219] disabling docker service ...
	I1213 00:09:02.791476  177307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:09:02.804977  177307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:09:02.817203  177307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:09:02.927181  177307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:09:03.037010  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:09:03.050235  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:09:03.068944  177307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1213 00:09:03.069048  177307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:03.078813  177307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:09:03.078975  177307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:03.089064  177307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:03.098790  177307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:03.109679  177307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
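(The sed edits above pin three keys in the CRI-O drop-in; as a reference sketch rather than captured output, the end state can be spot-checked on the guest with:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf

with the expected values being pause_image = "registry.k8s.io/pause:3.9", cgroup_manager = "cgroupfs" and conmon_cgroup = "pod".)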
	I1213 00:09:03.120686  177307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:09:03.128767  177307 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:09:03.128820  177307 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:09:03.141210  177307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
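(The sysctl failure above is the expected case when br_netfilter has not been loaded yet, which is why the step falls back to modprobe br_netfilter and then enables IPv4 forwarding. A minimal by-hand re-check of the same prerequisites, as a sketch rather than output from this run:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables    # key exists once the module is loaded; set to 1 if it reports 0
    cat /proc/sys/net/ipv4/ip_forward            # expected to read 1 after the echo above
)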
	I1213 00:09:03.149602  177307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:09:03.254618  177307 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 00:09:03.434005  177307 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:09:03.434097  177307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:09:03.440391  177307 start.go:543] Will wait 60s for crictl version
	I1213 00:09:03.440481  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:03.445570  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:09:03.492155  177307 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1213 00:09:03.492240  177307 ssh_runner.go:195] Run: crio --version
	I1213 00:09:03.549854  177307 ssh_runner.go:195] Run: crio --version
	I1213 00:09:03.605472  177307 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1213 00:09:03.606678  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetIP
	I1213 00:09:03.610326  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:03.610753  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:03.610789  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:03.611019  177307 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1213 00:09:03.616608  177307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:03.632258  177307 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1213 00:09:03.632317  177307 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:03.672640  177307 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1213 00:09:03.672666  177307 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 00:09:03.672723  177307 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:03.672772  177307 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:03.672774  177307 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:03.672820  177307 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:03.673002  177307 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1213 00:09:03.673032  177307 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:03.673038  177307 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:03.673094  177307 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:03.674386  177307 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:03.674433  177307 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1213 00:09:03.674505  177307 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:03.674648  177307 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:03.674774  177307 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:03.674822  177307 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:03.674864  177307 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:03.675103  177307 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:03.808980  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:03.812271  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1213 00:09:03.827742  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:03.828695  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:03.831300  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:03.846041  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:03.850598  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:03.908323  177307 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I1213 00:09:03.908378  177307 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:03.908458  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.122878  177307 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I1213 00:09:04.122930  177307 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:04.122955  177307 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1213 00:09:04.123115  177307 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:04.123132  177307 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1213 00:09:04.123164  177307 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:04.122988  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.123203  177307 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I1213 00:09:04.123230  177307 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:04.123245  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:04.123267  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.123065  177307 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I1213 00:09:04.123304  177307 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:04.123311  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.123338  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.123201  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.135289  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:04.139046  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:04.206020  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:04.206025  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I1213 00:09:04.206195  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1213 00:09:04.206291  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:04.206422  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:04.247875  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I1213 00:09:04.248003  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1213 00:09:04.248126  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1213 00:09:04.248193  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
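(The cache-load pass above follows the same pattern per image: inspect with podman, remove the tag via crictl rmi when the expected hash is not in the runtime, then stream the cached tarball back in with podman load. Done by hand it is roughly, as a sketch:

    sudo crictl images                                            # what the runtime already holds
    sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0    # one of the tarballs loaded later in this run
)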
	I1213 00:09:02.719708  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:02.719761  177122 api_server.go:103] status: https://192.168.61.249:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:02.719779  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:09:02.780571  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:02.780621  177122 api_server.go:103] status: https://192.168.61.249:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:03.281221  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:09:03.290375  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:03.290413  177122 api_server.go:103] status: https://192.168.61.249:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:03.781510  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:09:03.788285  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:03.788314  177122 api_server.go:103] status: https://192.168.61.249:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:04.280872  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:09:04.288043  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 200:
	ok
	I1213 00:09:04.299772  177122 api_server.go:141] control plane version: v1.28.4
	I1213 00:09:04.299808  177122 api_server.go:131] duration metric: took 5.445787793s to wait for apiserver health ...
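(The healthz progression above, 403 while anonymous access is still unauthorized, 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, then 200, is the normal bring-up sequence for a restarted apiserver. The same probe can be reproduced by hand; a sketch, with -k standing in for the unverified, anonymous request the poller makes:

    curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.61.249:8443/healthz
    curl -sk 'https://192.168.61.249:8443/healthz?verbose'   # prints the per-hook [+]/[-] breakdown seen above
)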
	I1213 00:09:04.299821  177122 cni.go:84] Creating CNI manager for ""
	I1213 00:09:04.299830  177122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:04.301759  177122 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:09:02.186420  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Start
	I1213 00:09:02.186584  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Ensuring networks are active...
	I1213 00:09:02.187464  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Ensuring network default is active
	I1213 00:09:02.187836  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Ensuring network mk-default-k8s-diff-port-743278 is active
	I1213 00:09:02.188238  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Getting domain xml...
	I1213 00:09:02.188979  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Creating domain...
	I1213 00:09:03.516121  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting to get IP...
	I1213 00:09:03.517461  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:03.518001  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:03.518058  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:03.517966  178294 retry.go:31] will retry after 198.440266ms: waiting for machine to come up
	I1213 00:09:03.718554  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:03.718808  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:03.718846  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:03.718804  178294 retry.go:31] will retry after 319.889216ms: waiting for machine to come up
	I1213 00:09:04.040334  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:04.040806  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:04.040956  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:04.040869  178294 retry.go:31] will retry after 465.804275ms: waiting for machine to come up
	I1213 00:09:04.508751  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:04.509133  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:04.509237  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:04.509181  178294 retry.go:31] will retry after 609.293222ms: waiting for machine to come up
	I1213 00:09:04.303497  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:09:04.332773  177122 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
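(The 457-byte payload written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration being set up here; its exact fields come from minikube's template and are not reproduced in this log. A quick way to see what actually landed on the node, as a sketch:

    sudo cat /etc/cni/net.d/1-k8s.conflist
    # typically a conflist with a "bridge" plugin entry and host-local IPAM for the pod CIDR
)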
	I1213 00:09:04.373266  177122 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:09:04.384737  177122 system_pods.go:59] 9 kube-system pods found
	I1213 00:09:04.384791  177122 system_pods.go:61] "coredns-5dd5756b68-5vm25" [83fb4b19-82a2-42eb-b4df-6fd838fb8848] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 00:09:04.384805  177122 system_pods.go:61] "coredns-5dd5756b68-6mfmr" [e9598d8f-e497-4725-8eca-7fe0e7c2c2f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 00:09:04.384820  177122 system_pods.go:61] "etcd-embed-certs-335807" [cf066481-3312-4fce-8e29-e00a0177f188] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 00:09:04.384833  177122 system_pods.go:61] "kube-apiserver-embed-certs-335807" [0a545be1-8bb8-425a-889e-5ee1293e0bdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 00:09:04.384847  177122 system_pods.go:61] "kube-controller-manager-embed-certs-335807" [fd7ec770-5008-46f9-9f41-122e56baf2e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 00:09:04.384862  177122 system_pods.go:61] "kube-proxy-k8n7r" [df8cefdc-7c91-40e6-8949-ba413fd75b28] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 00:09:04.384874  177122 system_pods.go:61] "kube-scheduler-embed-certs-335807" [d2431157-640c-49e6-a83d-37cac6be1c50] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 00:09:04.384883  177122 system_pods.go:61] "metrics-server-57f55c9bc5-fx5pd" [8aa6fc5a-5649-47b2-a7de-3cabfd1515a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:09:04.384899  177122 system_pods.go:61] "storage-provisioner" [02026bc0-4e03-4747-ad77-052f2911efe1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 00:09:04.384909  177122 system_pods.go:74] duration metric: took 11.614377ms to wait for pod list to return data ...
	I1213 00:09:04.384928  177122 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:09:04.389533  177122 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:09:04.389578  177122 node_conditions.go:123] node cpu capacity is 2
	I1213 00:09:04.389594  177122 node_conditions.go:105] duration metric: took 4.657548ms to run NodePressure ...
	I1213 00:09:04.389622  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:04.771105  177122 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1213 00:09:04.778853  177122 kubeadm.go:787] kubelet initialised
	I1213 00:09:04.778886  177122 kubeadm.go:788] duration metric: took 7.74816ms waiting for restarted kubelet to initialise ...
	I1213 00:09:04.778898  177122 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:04.795344  177122 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:04.323893  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1213 00:09:04.323901  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I1213 00:09:04.324122  177307 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1213 00:09:04.324168  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1213 00:09:04.324006  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I1213 00:09:04.324031  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I1213 00:09:04.324300  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1213 00:09:04.324336  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1213 00:09:04.324067  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1213 00:09:04.324096  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I1213 00:09:04.324100  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1213 00:09:04.597566  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:07.626684  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.302476413s)
	I1213 00:09:07.626718  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I1213 00:09:07.626754  177307 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1213 00:09:07.626784  177307 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0: (3.302394961s)
	I1213 00:09:07.626821  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1213 00:09:07.626824  177307 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (3.302508593s)
	I1213 00:09:07.626859  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I1213 00:09:07.626833  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1213 00:09:07.626882  177307 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.029282623s)
	I1213 00:09:07.626755  177307 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.302393062s)
	I1213 00:09:07.626939  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I1213 00:09:07.626975  177307 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1213 00:09:07.627010  177307 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:07.627072  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:05.120691  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:05.121251  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:05.121285  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:05.121183  178294 retry.go:31] will retry after 488.195845ms: waiting for machine to come up
	I1213 00:09:05.610815  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:05.611226  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:05.611258  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:05.611167  178294 retry.go:31] will retry after 705.048097ms: waiting for machine to come up
	I1213 00:09:06.317891  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:06.318329  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:06.318353  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:06.318278  178294 retry.go:31] will retry after 788.420461ms: waiting for machine to come up
	I1213 00:09:07.108179  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:07.108736  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:07.108769  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:07.108696  178294 retry.go:31] will retry after 1.331926651s: waiting for machine to come up
	I1213 00:09:08.442609  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:08.443087  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:08.443114  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:08.443032  178294 retry.go:31] will retry after 1.180541408s: waiting for machine to come up
	I1213 00:09:09.625170  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:09.625722  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:09.625753  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:09.625653  178294 retry.go:31] will retry after 1.866699827s: waiting for machine to come up
	I1213 00:09:06.828008  177122 pod_ready.go:102] pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:09.322889  177122 pod_ready.go:102] pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:09.822883  177122 pod_ready.go:92] pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:09.822913  177122 pod_ready.go:81] duration metric: took 5.027534973s waiting for pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:09.822927  177122 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6mfmr" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:09.828990  177122 pod_ready.go:92] pod "coredns-5dd5756b68-6mfmr" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:09.829018  177122 pod_ready.go:81] duration metric: took 6.083345ms waiting for pod "coredns-5dd5756b68-6mfmr" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:09.829035  177122 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:09.803403  177307 ssh_runner.go:235] Completed: which crictl: (2.176302329s)
	I1213 00:09:09.803541  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:09.803468  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.176578633s)
	I1213 00:09:09.803602  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1213 00:09:09.803634  177307 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1213 00:09:09.803673  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1213 00:09:09.851557  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1213 00:09:09.851690  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1213 00:09:12.107222  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.303514888s)
	I1213 00:09:12.107284  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I1213 00:09:12.107292  177307 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.255575693s)
	I1213 00:09:12.107308  177307 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1213 00:09:12.107336  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1213 00:09:12.107363  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1213 00:09:11.494563  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:11.495148  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:11.495182  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:11.495076  178294 retry.go:31] will retry after 2.859065831s: waiting for machine to come up
	I1213 00:09:14.356328  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:14.356778  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:14.356814  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:14.356719  178294 retry.go:31] will retry after 3.506628886s: waiting for machine to come up
	I1213 00:09:11.849447  177122 pod_ready.go:102] pod "etcd-embed-certs-335807" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:14.349299  177122 pod_ready.go:102] pod "etcd-embed-certs-335807" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:14.853963  177122 pod_ready.go:92] pod "etcd-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:14.853989  177122 pod_ready.go:81] duration metric: took 5.024945989s waiting for pod "etcd-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:14.854001  177122 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:14.861663  177122 pod_ready.go:92] pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:14.861685  177122 pod_ready.go:81] duration metric: took 7.676036ms waiting for pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:14.861697  177122 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:16.223090  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.115697846s)
	I1213 00:09:16.223134  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1213 00:09:16.223165  177307 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1213 00:09:16.223211  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1213 00:09:17.473407  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.25017316s)
	I1213 00:09:17.473435  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I1213 00:09:17.473476  177307 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1213 00:09:17.473552  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1213 00:09:17.864739  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:17.865213  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:17.865237  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:17.865171  178294 retry.go:31] will retry after 2.94932643s: waiting for machine to come up
	I1213 00:09:16.884215  177122 pod_ready.go:102] pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:17.383872  177122 pod_ready.go:92] pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:17.383906  177122 pod_ready.go:81] duration metric: took 2.52219538s waiting for pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.383928  177122 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-k8n7r" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.389464  177122 pod_ready.go:92] pod "kube-proxy-k8n7r" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:17.389482  177122 pod_ready.go:81] duration metric: took 5.547172ms waiting for pod "kube-proxy-k8n7r" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.389490  177122 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.419020  177122 pod_ready.go:92] pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:17.419047  177122 pod_ready.go:81] duration metric: took 29.549704ms waiting for pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.419056  177122 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:19.730210  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:22.069281  176813 start.go:369] acquired machines lock for "old-k8s-version-508612" in 1m3.72259979s
	I1213 00:09:22.069359  176813 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:09:22.069367  176813 fix.go:54] fixHost starting: 
	I1213 00:09:22.069812  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:22.069851  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:22.088760  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45405
	I1213 00:09:22.089211  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:22.089766  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:09:22.089795  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:22.090197  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:22.090396  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:22.090574  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:09:22.092039  176813 fix.go:102] recreateIfNeeded on old-k8s-version-508612: state=Stopped err=<nil>
	I1213 00:09:22.092064  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	W1213 00:09:22.092241  176813 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:09:22.094310  176813 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-508612" ...
	I1213 00:09:20.817420  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.817794  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has current primary IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.817833  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Found IP for machine: 192.168.72.144
	I1213 00:09:20.817870  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Reserving static IP address...
	I1213 00:09:20.818250  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-743278", mac: "52:54:00:d1:a8:22", ip: "192.168.72.144"} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:20.818272  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Reserved static IP address: 192.168.72.144
	I1213 00:09:20.818286  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | skip adding static IP to network mk-default-k8s-diff-port-743278 - found existing host DHCP lease matching {name: "default-k8s-diff-port-743278", mac: "52:54:00:d1:a8:22", ip: "192.168.72.144"}
	I1213 00:09:20.818298  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Getting to WaitForSSH function...
	I1213 00:09:20.818312  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for SSH to be available...
	I1213 00:09:20.820093  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.820378  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:20.820409  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.820525  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Using SSH client type: external
	I1213 00:09:20.820552  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa (-rw-------)
	I1213 00:09:20.820587  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:09:20.820618  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | About to run SSH command:
	I1213 00:09:20.820632  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | exit 0
	I1213 00:09:20.907942  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | SSH cmd err, output: <nil>: 
	I1213 00:09:20.908280  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetConfigRaw
	I1213 00:09:20.909042  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetIP
	I1213 00:09:20.911222  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.911544  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:20.911569  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.911826  177409 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/config.json ...
	I1213 00:09:20.912048  177409 machine.go:88] provisioning docker machine ...
	I1213 00:09:20.912071  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:20.912284  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetMachineName
	I1213 00:09:20.912425  177409 buildroot.go:166] provisioning hostname "default-k8s-diff-port-743278"
	I1213 00:09:20.912460  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetMachineName
	I1213 00:09:20.912585  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:20.914727  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.915081  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:20.915113  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.915257  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:20.915449  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:20.915562  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:20.915671  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:20.915842  177409 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:20.916275  177409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1213 00:09:20.916293  177409 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-743278 && echo "default-k8s-diff-port-743278" | sudo tee /etc/hostname
	I1213 00:09:21.042561  177409 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-743278
	
	I1213 00:09:21.042606  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.045461  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.045809  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.045851  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.045957  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.046181  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.046350  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.046508  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.046685  177409 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:21.047008  177409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1213 00:09:21.047034  177409 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-743278' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-743278/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-743278' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:09:21.169124  177409 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:09:21.169155  177409 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:09:21.169175  177409 buildroot.go:174] setting up certificates
	I1213 00:09:21.169185  177409 provision.go:83] configureAuth start
	I1213 00:09:21.169194  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetMachineName
	I1213 00:09:21.169502  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetIP
	I1213 00:09:21.172929  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.173329  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.173361  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.173540  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.175847  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.176249  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.176277  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.176447  177409 provision.go:138] copyHostCerts
	I1213 00:09:21.176509  177409 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:09:21.176525  177409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:09:21.176584  177409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:09:21.176677  177409 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:09:21.176744  177409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:09:21.176775  177409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:09:21.176841  177409 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:09:21.176848  177409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:09:21.176866  177409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:09:21.176922  177409 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-743278 san=[192.168.72.144 192.168.72.144 localhost 127.0.0.1 minikube default-k8s-diff-port-743278]
	I1213 00:09:21.314924  177409 provision.go:172] copyRemoteCerts
	I1213 00:09:21.315003  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:09:21.315032  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.318149  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.318550  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.318582  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.318787  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.319005  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.319191  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.319346  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:21.409699  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:09:21.438626  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1213 00:09:21.468607  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 00:09:21.495376  177409 provision.go:86] duration metric: configureAuth took 326.171872ms
	I1213 00:09:21.495403  177409 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:09:21.495621  177409 config.go:182] Loaded profile config "default-k8s-diff-port-743278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:09:21.495700  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.498778  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.499247  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.499279  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.499495  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.499710  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.499877  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.500098  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.500285  177409 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:21.500728  177409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1213 00:09:21.500751  177409 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:09:21.822577  177409 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:09:21.822606  177409 machine.go:91] provisioned docker machine in 910.541774ms
	I1213 00:09:21.822619  177409 start.go:300] post-start starting for "default-k8s-diff-port-743278" (driver="kvm2")
	I1213 00:09:21.822632  177409 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:09:21.822659  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:21.823015  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:09:21.823044  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.825948  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.826367  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.826403  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.826577  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.826789  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.826965  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.827146  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:21.915743  177409 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:09:21.920142  177409 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:09:21.920169  177409 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:09:21.920249  177409 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:09:21.920343  177409 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:09:21.920474  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:09:21.929896  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:21.951854  177409 start.go:303] post-start completed in 129.217251ms
	I1213 00:09:21.951880  177409 fix.go:56] fixHost completed within 19.790175647s
	I1213 00:09:21.951904  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.954710  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.955103  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.955137  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.955352  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.955533  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.955685  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.955814  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.955980  177409 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:21.956485  177409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1213 00:09:21.956505  177409 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1213 00:09:22.069059  177409 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702426162.011062386
	
	I1213 00:09:22.069089  177409 fix.go:206] guest clock: 1702426162.011062386
	I1213 00:09:22.069100  177409 fix.go:219] Guest: 2023-12-13 00:09:22.011062386 +0000 UTC Remote: 2023-12-13 00:09:21.951884769 +0000 UTC m=+281.971624237 (delta=59.177617ms)
	I1213 00:09:22.069142  177409 fix.go:190] guest clock delta is within tolerance: 59.177617ms
	I1213 00:09:22.069153  177409 start.go:83] releasing machines lock for "default-k8s-diff-port-743278", held for 19.907486915s
	I1213 00:09:22.069191  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:22.069478  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetIP
	I1213 00:09:22.072371  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.072761  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:22.072794  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.072922  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:22.073441  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:22.073605  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:22.073670  177409 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:09:22.073719  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:22.073821  177409 ssh_runner.go:195] Run: cat /version.json
	I1213 00:09:22.073841  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:22.076233  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.076550  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.076703  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:22.076733  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.076874  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:22.077050  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:22.077080  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.077052  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:22.077227  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:22.077303  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:22.077540  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:22.077630  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:22.077714  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:22.077851  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:22.188131  177409 ssh_runner.go:195] Run: systemctl --version
	I1213 00:09:22.193896  177409 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:09:22.339227  177409 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:09:22.346292  177409 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:09:22.346366  177409 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:09:22.361333  177409 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:09:22.361364  177409 start.go:475] detecting cgroup driver to use...
	I1213 00:09:22.361438  177409 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:09:22.374698  177409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:09:22.387838  177409 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:09:22.387897  177409 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:09:22.402969  177409 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:09:22.417038  177409 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:09:22.533130  177409 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:09:22.665617  177409 docker.go:219] disabling docker service ...
	I1213 00:09:22.665690  177409 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:09:22.681327  177409 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:09:22.692842  177409 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:09:22.816253  177409 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:09:22.951988  177409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:09:22.967607  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:09:22.985092  177409 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1213 00:09:22.985158  177409 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:22.994350  177409 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:09:22.994403  177409 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:23.003372  177409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:23.012176  177409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:23.021215  177409 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:09:23.031105  177409 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:09:23.039486  177409 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:09:23.039552  177409 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:09:23.053085  177409 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:09:23.062148  177409 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:09:23.182275  177409 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 00:09:23.357901  177409 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:09:23.357991  177409 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:09:23.364148  177409 start.go:543] Will wait 60s for crictl version
	I1213 00:09:23.364225  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:09:23.368731  177409 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:09:23.408194  177409 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1213 00:09:23.408288  177409 ssh_runner.go:195] Run: crio --version
	I1213 00:09:23.461483  177409 ssh_runner.go:195] Run: crio --version
	I1213 00:09:23.513553  177409 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1213 00:09:20.148999  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.675412499s)
	I1213 00:09:20.149037  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I1213 00:09:20.149073  177307 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1213 00:09:20.149116  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1213 00:09:21.101559  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1213 00:09:21.101608  177307 cache_images.go:123] Successfully loaded all cached images
	I1213 00:09:21.101615  177307 cache_images.go:92] LoadImages completed in 17.428934706s
	I1213 00:09:21.101694  177307 ssh_runner.go:195] Run: crio config
	I1213 00:09:21.159955  177307 cni.go:84] Creating CNI manager for ""
	I1213 00:09:21.159978  177307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:21.159999  177307 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1213 00:09:21.160023  177307 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.181 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-143586 NodeName:no-preload-143586 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 00:09:21.160198  177307 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-143586"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.181
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.181"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 00:09:21.160303  177307 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-143586 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-143586 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1213 00:09:21.160378  177307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1213 00:09:21.170615  177307 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:09:21.170701  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:09:21.180228  177307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1213 00:09:21.198579  177307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1213 00:09:21.215096  177307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1213 00:09:21.233288  177307 ssh_runner.go:195] Run: grep 192.168.50.181	control-plane.minikube.internal$ /etc/hosts
	I1213 00:09:21.236666  177307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:21.248811  177307 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586 for IP: 192.168.50.181
	I1213 00:09:21.248847  177307 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:21.249007  177307 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:09:21.249058  177307 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:09:21.249154  177307 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/client.key
	I1213 00:09:21.249238  177307 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/apiserver.key.8f5c2e66
	I1213 00:09:21.249291  177307 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/proxy-client.key
	I1213 00:09:21.249427  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:09:21.249468  177307 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:09:21.249484  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:09:21.249523  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:09:21.249559  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:09:21.249591  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:09:21.249642  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:21.250517  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:09:21.276697  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 00:09:21.299356  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:09:21.322849  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 00:09:21.348145  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:09:21.370885  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:09:21.393257  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:09:21.418643  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:09:21.446333  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:09:21.476374  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:09:21.506662  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:09:21.530653  177307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:09:21.555129  177307 ssh_runner.go:195] Run: openssl version
	I1213 00:09:21.561174  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:09:21.571372  177307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:21.575988  177307 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:21.576053  177307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:21.581633  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 00:09:21.590564  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:09:21.599910  177307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:09:21.604113  177307 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:09:21.604160  177307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:09:21.609303  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1213 00:09:21.619194  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:09:21.628171  177307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:09:21.632419  177307 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:09:21.632494  177307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:09:21.638310  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 00:09:21.648369  177307 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:09:21.653143  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 00:09:21.659543  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 00:09:21.665393  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 00:09:21.670855  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 00:09:21.676290  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 00:09:21.681864  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
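The run of openssl commands above (ending with `-checkend 86400` on each control-plane certificate) is verifying that every existing cert is still valid for at least 24 hours before the cluster restart proceeds. As an illustrative sketch only (not minikube's actual implementation), the same check can be done natively in Go:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certValidFor reports whether the PEM certificate at path remains valid for
// at least the given duration (the log uses 86400 s, i.e. 24 h).
func certValidFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for at least 24h:", ok)
}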
	I1213 00:09:21.688162  177307 kubeadm.go:404] StartCluster: {Name:no-preload-143586 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-143586 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.181 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:09:21.688243  177307 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:09:21.688280  177307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:21.727451  177307 cri.go:89] found id: ""
	I1213 00:09:21.727536  177307 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:09:21.739044  177307 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1213 00:09:21.739066  177307 kubeadm.go:636] restartCluster start
	I1213 00:09:21.739124  177307 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 00:09:21.747328  177307 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:21.748532  177307 kubeconfig.go:92] found "no-preload-143586" server: "https://192.168.50.181:8443"
	I1213 00:09:21.751029  177307 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 00:09:21.759501  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:21.759546  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:21.771029  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:21.771048  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:21.771095  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:21.782184  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:22.282507  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:22.282588  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:22.294105  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:22.783207  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:22.783266  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:22.796776  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:23.282325  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:23.282395  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:23.295974  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:23.782516  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:23.782615  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:23.797912  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:23.514911  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetIP
	I1213 00:09:23.517973  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:23.518335  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:23.518366  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:23.518566  177409 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1213 00:09:23.523522  177409 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:23.537195  177409 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1213 00:09:23.537261  177409 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:23.579653  177409 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1213 00:09:23.579729  177409 ssh_runner.go:195] Run: which lz4
	I1213 00:09:23.583956  177409 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1213 00:09:23.588686  177409 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 00:09:23.588720  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1213 00:09:22.095647  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Start
	I1213 00:09:22.095821  176813 main.go:141] libmachine: (old-k8s-version-508612) Ensuring networks are active...
	I1213 00:09:22.096548  176813 main.go:141] libmachine: (old-k8s-version-508612) Ensuring network default is active
	I1213 00:09:22.096936  176813 main.go:141] libmachine: (old-k8s-version-508612) Ensuring network mk-old-k8s-version-508612 is active
	I1213 00:09:22.097366  176813 main.go:141] libmachine: (old-k8s-version-508612) Getting domain xml...
	I1213 00:09:22.097939  176813 main.go:141] libmachine: (old-k8s-version-508612) Creating domain...
	I1213 00:09:23.423128  176813 main.go:141] libmachine: (old-k8s-version-508612) Waiting to get IP...
	I1213 00:09:23.424090  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:23.424606  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:23.424676  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:23.424588  178471 retry.go:31] will retry after 260.416347ms: waiting for machine to come up
	I1213 00:09:23.687268  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:23.687867  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:23.687902  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:23.687814  178471 retry.go:31] will retry after 377.709663ms: waiting for machine to come up
	I1213 00:09:24.067588  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:24.068249  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:24.068277  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:24.068177  178471 retry.go:31] will retry after 480.876363ms: waiting for machine to come up
	I1213 00:09:24.550715  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:24.551244  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:24.551278  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:24.551191  178471 retry.go:31] will retry after 389.885819ms: waiting for machine to come up
	I1213 00:09:24.942898  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:24.943495  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:24.943526  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:24.943443  178471 retry.go:31] will retry after 532.578432ms: waiting for machine to come up
	I1213 00:09:25.478278  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:25.478810  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:25.478845  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:25.478781  178471 retry.go:31] will retry after 599.649827ms: waiting for machine to come up
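The libmachine lines above show the KVM driver polling libvirt for a DHCP lease that matches the VM's MAC address, retrying with growing delays until the machine gets an IP. Outside minikube, roughly the same lookup can be approximated by shelling out to `virsh net-dhcp-leases`; the helper names below are hypothetical and this is only a sketch of the pattern, not the driver's code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// leaseFor scans `virsh net-dhcp-leases <network>` output for a line that
// mentions the VM's MAC address and returns it once a lease exists.
func leaseFor(network, mac string) (string, bool) {
	out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
	if err != nil {
		return "", false
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, mac) {
			return line, true
		}
	}
	return "", false
}

func main() {
	// Retry with an increasing delay, mirroring the retry.go backoff in the log.
	delay := 250 * time.Millisecond
	for i := 0; i < 20; i++ {
		if lease, ok := leaseFor("mk-old-k8s-version-508612", "52:54:00:dd:da:91"); ok {
			fmt.Println("lease found:", lease)
			return
		}
		time.Sleep(delay)
		delay += delay / 2
	}
	fmt.Println("machine never got an IP")
}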
	I1213 00:09:22.230086  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:24.729105  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:24.282598  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:24.282708  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:24.298151  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:24.782530  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:24.782639  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:24.798661  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:25.283235  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:25.283393  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:25.297662  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:25.783319  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:25.783436  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:25.797129  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:26.282666  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:26.282789  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:26.295674  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:26.783065  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:26.783147  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:26.794192  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:27.282703  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:27.282775  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:27.294823  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:27.782891  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:27.782975  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:27.798440  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:28.282826  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:28.282909  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:28.293752  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:28.782264  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:28.782325  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:28.793986  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:25.524765  177409 crio.go:444] Took 1.940853 seconds to copy over tarball
	I1213 00:09:25.524843  177409 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 00:09:28.426493  177409 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.901618536s)
	I1213 00:09:28.426522  177409 crio.go:451] Took 2.901730 seconds to extract the tarball
	I1213 00:09:28.426533  177409 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 00:09:28.467170  177409 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:28.520539  177409 crio.go:496] all images are preloaded for cri-o runtime.
	I1213 00:09:28.520567  177409 cache_images.go:84] Images are preloaded, skipping loading
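The lines above trace the preload path end to end: `crictl images --output json` does not find the expected kube-apiserver image, so the ~458 MB preloaded tarball is copied to the VM and unpacked with `tar -I lz4 -C /var -xf`, after which the image check passes and loading is skipped. A rough local sketch of that decision (hypothetical helper name, not the minikube source, which performs these steps over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensurePreload checks whether a marker image is already present in CRI-O's
// store and, if not, extracts the lz4-compressed preload tarball into /var.
func ensurePreload(markerImage, tarball string) error {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return fmt.Errorf("listing images: %w", err)
	}
	if strings.Contains(string(out), markerImage) {
		fmt.Println("images are preloaded, skipping extraction")
		return nil
	}
	// Extract the tarball the same way the log does.
	if err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).Run(); err != nil {
		return fmt.Errorf("extracting %s: %w", tarball, err)
	}
	return nil
}

func main() {
	if err := ensurePreload("registry.k8s.io/kube-apiserver:v1.28.4", "/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}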
	I1213 00:09:28.520654  177409 ssh_runner.go:195] Run: crio config
	I1213 00:09:28.588320  177409 cni.go:84] Creating CNI manager for ""
	I1213 00:09:28.588348  177409 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:28.588370  177409 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1213 00:09:28.588395  177409 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-743278 NodeName:default-k8s-diff-port-743278 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 00:09:28.588593  177409 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-743278"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 00:09:28.588687  177409 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-743278 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-743278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1213 00:09:28.588755  177409 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1213 00:09:28.597912  177409 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:09:28.597987  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:09:28.608324  177409 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1213 00:09:28.627102  177409 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 00:09:28.646837  177409 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1213 00:09:28.664534  177409 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I1213 00:09:28.668580  177409 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:28.680736  177409 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278 for IP: 192.168.72.144
	I1213 00:09:28.680777  177409 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:28.680971  177409 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:09:28.681037  177409 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:09:28.681140  177409 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/client.key
	I1213 00:09:28.681234  177409 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/apiserver.key.1dd7f3f2
	I1213 00:09:28.681301  177409 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/proxy-client.key
	I1213 00:09:28.681480  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:09:28.681525  177409 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:09:28.681543  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:09:28.681587  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:09:28.681636  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:09:28.681681  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:09:28.681743  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:28.682710  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:09:28.707852  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 00:09:28.732792  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:09:28.755545  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 00:09:28.779880  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:09:28.805502  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:09:28.829894  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:09:28.853211  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:09:28.877291  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:09:28.899870  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:09:28.922141  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:09:28.945634  177409 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:09:28.962737  177409 ssh_runner.go:195] Run: openssl version
	I1213 00:09:28.968869  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:09:28.980535  177409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:09:28.985219  177409 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:09:28.985284  177409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:09:28.990911  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 00:09:29.001595  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:09:29.012408  177409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:29.017644  177409 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:29.017760  177409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:29.023914  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 00:09:29.034793  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:09:29.045825  177409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:09:29.050538  177409 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:09:29.050584  177409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:09:29.057322  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1213 00:09:29.067993  177409 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:09:29.072782  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 00:09:29.078806  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 00:09:29.084744  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 00:09:29.090539  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 00:09:29.096734  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 00:09:29.102729  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 00:09:29.108909  177409 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-743278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-743278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:09:29.109022  177409 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:09:29.109095  177409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:29.158003  177409 cri.go:89] found id: ""
	I1213 00:09:29.158100  177409 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:09:29.169464  177409 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1213 00:09:29.169500  177409 kubeadm.go:636] restartCluster start
	I1213 00:09:29.169555  177409 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 00:09:29.180347  177409 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:29.181609  177409 kubeconfig.go:92] found "default-k8s-diff-port-743278" server: "https://192.168.72.144:8444"
	I1213 00:09:29.184377  177409 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 00:09:29.193593  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.193645  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.205447  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:29.205465  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.205519  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.221169  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:29.721729  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.721835  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.735942  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:26.080407  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:26.081034  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:26.081061  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:26.080973  178471 retry.go:31] will retry after 1.103545811s: waiting for machine to come up
	I1213 00:09:27.186673  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:27.187208  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:27.187241  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:27.187152  178471 retry.go:31] will retry after 977.151221ms: waiting for machine to come up
	I1213 00:09:28.165799  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:28.166219  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:28.166257  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:28.166166  178471 retry.go:31] will retry after 1.27451971s: waiting for machine to come up
	I1213 00:09:29.441683  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:29.442203  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:29.442240  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:29.442122  178471 retry.go:31] will retry after 1.620883976s: waiting for machine to come up
	I1213 00:09:26.733297  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:29.624623  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:29.282975  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.621544  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.632749  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:29.783112  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.783214  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.794919  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:30.282457  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:30.282528  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:30.293852  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:30.782400  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:30.782499  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:30.797736  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.282276  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:31.282367  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:31.298115  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.759957  177307 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1213 00:09:31.760001  177307 kubeadm.go:1135] stopping kube-system containers ...
	I1213 00:09:31.760013  177307 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 00:09:31.760078  177307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:31.799045  177307 cri.go:89] found id: ""
	I1213 00:09:31.799146  177307 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 00:09:31.813876  177307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:09:31.823305  177307 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:09:31.823382  177307 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:09:31.831741  177307 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1213 00:09:31.831767  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:31.961871  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:32.826330  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:33.045107  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:33.119065  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:33.187783  177307 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:09:33.187887  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:33.217142  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:33.735695  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:34.236063  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
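After the restart-time `kubeadm init phase` commands complete, the log waits for the kube-apiserver process to appear by re-running `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500 ms. A simplified sketch of that polling pattern, assuming a fixed deadline (illustrative only, not minikube's exact code):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep every 500ms until the kube-apiserver
// process exists or the context deadline expires.
func waitForAPIServerProcess(ctx context.Context) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil // pid found
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("apiserver process did not appear: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	pid, err := waitForAPIServerProcess(ctx)
	fmt.Println(pid, err)
}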
	I1213 00:09:30.221906  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:30.230723  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:30.243849  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:30.721380  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:30.721492  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:30.734401  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.222026  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:31.222150  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:31.235400  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.722107  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:31.722189  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:31.735415  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:32.222216  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:32.222365  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:32.238718  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:32.721270  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:32.721389  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:32.735677  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:33.222261  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:33.222329  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:33.243918  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:33.721349  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:33.721438  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:33.738138  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:34.221645  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:34.221748  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:34.238845  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:34.721320  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:34.721390  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:34.738271  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.065065  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:31.065494  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:31.065528  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:31.065436  178471 retry.go:31] will retry after 2.452686957s: waiting for machine to come up
	I1213 00:09:33.519937  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:33.520505  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:33.520537  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:33.520468  178471 retry.go:31] will retry after 2.830999713s: waiting for machine to come up
	I1213 00:09:31.729101  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:34.229143  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:34.735218  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:35.235570  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:35.736120  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:35.764916  177307 api_server.go:72] duration metric: took 2.577131698s to wait for apiserver process to appear ...
	I1213 00:09:35.764942  177307 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:09:35.764971  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:35.765820  177307 api_server.go:269] stopped: https://192.168.50.181:8443/healthz: Get "https://192.168.50.181:8443/healthz": dial tcp 192.168.50.181:8443: connect: connection refused
	I1213 00:09:35.765860  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:35.766257  177307 api_server.go:269] stopped: https://192.168.50.181:8443/healthz: Get "https://192.168.50.181:8443/healthz": dial tcp 192.168.50.181:8443: connect: connection refused
	I1213 00:09:36.266842  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
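Once the process exists, the log switches to polling https://192.168.50.181:8443/healthz, which progresses from "connection refused" to 403 (while RBAC bootstraps) to 500 (while post-start hooks finish) before becoming healthy. A small polling client that tolerates the self-signed serving certificate approximates this wait; a sketch under the assumption of a fixed timeout, not the tool's own implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the deadline passes. TLS verification is skipped because the check targets
// the node IP directly, as in the log.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.50.181:8443/healthz", 4*time.Minute))
}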
	I1213 00:09:35.221935  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:35.222069  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:35.240609  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:35.721801  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:35.721965  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:35.765295  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:36.221944  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:36.222021  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:36.238211  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:36.721750  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:36.721830  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:36.736765  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:37.221936  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:37.222185  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:37.238002  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:37.721304  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:37.721385  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:37.734166  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:38.221603  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:38.221701  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:38.235174  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:38.721704  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:38.721794  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:38.735963  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:39.193664  177409 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1213 00:09:39.193713  177409 kubeadm.go:1135] stopping kube-system containers ...
	I1213 00:09:39.193727  177409 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 00:09:39.193787  177409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:39.238262  177409 cri.go:89] found id: ""
	I1213 00:09:39.238336  177409 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 00:09:39.258625  177409 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:09:39.271127  177409 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:09:39.271196  177409 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:09:39.280870  177409 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1213 00:09:39.280906  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:39.399746  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:36.353967  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:36.354453  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:36.354481  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:36.354415  178471 retry.go:31] will retry after 2.983154328s: waiting for machine to come up
	I1213 00:09:39.341034  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:39.341497  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:39.341526  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:39.341462  178471 retry.go:31] will retry after 3.436025657s: waiting for machine to come up
	I1213 00:09:36.230811  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:38.729730  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:40.732654  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:39.693843  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:39.693877  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:39.693896  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:39.767118  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:39.767153  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:39.767169  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:39.787684  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:39.787725  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:40.267069  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:40.272416  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:40.272464  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:40.766651  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:40.799906  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:40.799942  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:41.266411  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:41.271259  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 200:
	ok
	I1213 00:09:41.278691  177307 api_server.go:141] control plane version: v1.29.0-rc.2
	I1213 00:09:41.278715  177307 api_server.go:131] duration metric: took 5.51376527s to wait for apiserver health ...
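The 403 and 500 responses above are the expected progression while a restarted apiserver finishes its post-start hooks: anonymous requests are rejected until the RBAC bootstrap roles are installed, then /healthz lists the remaining hooks as failed, and finally it returns 200 ("ok"). A minimal polling sketch of that wait (a hypothetical probe that skips TLS verification; minikube's api_server.go uses its own client and timings):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // assumption: anonymous probe, so the apiserver cert is not verified here
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://192.168.50.181:8443/healthz"
        for {
            resp, err := client.Get(url)
            if err != nil {
                // e.g. "connection refused" while the apiserver container restarts
                fmt.Println("stopped:", err)
                time.Sleep(500 * time.Millisecond)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("healthz ok")
                return
            }
            // 403 (anonymous) and 500 (post-start hooks still running) both mean "not ready yet"
            fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            time.Sleep(500 * time.Millisecond)
        }
    }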
	I1213 00:09:41.278725  177307 cni.go:84] Creating CNI manager for ""
	I1213 00:09:41.278732  177307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:41.280473  177307 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:09:41.281924  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:09:41.308598  177307 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:09:41.330367  177307 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:09:41.342017  177307 system_pods.go:59] 8 kube-system pods found
	I1213 00:09:41.342048  177307 system_pods.go:61] "coredns-76f75df574-87nc6" [829c7a44-85a0-4ed0-b98a-b5016aa04b97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 00:09:41.342054  177307 system_pods.go:61] "etcd-no-preload-143586" [b50e57af-530a-4689-bf42-a9f74fa6bea1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 00:09:41.342065  177307 system_pods.go:61] "kube-apiserver-no-preload-143586" [3aed4b84-e029-433a-8394-f99608b52edd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 00:09:41.342071  177307 system_pods.go:61] "kube-controller-manager-no-preload-143586" [f88e182a-0a81-4c7b-b2b3-d6911baf340f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 00:09:41.342080  177307 system_pods.go:61] "kube-proxy-8k9x6" [a71d2257-2012-4d0d-948d-d69c0c04bd2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 00:09:41.342086  177307 system_pods.go:61] "kube-scheduler-no-preload-143586" [dfb7b176-fbf8-4542-890f-1eba0f699b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 00:09:41.342098  177307 system_pods.go:61] "metrics-server-57f55c9bc5-px5lm" [25b8b500-0ad0-4da3-8f7f-d8c46a848e8a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:09:41.342106  177307 system_pods.go:61] "storage-provisioner" [bb18a95a-ed99-43f7-bc6f-333e0b86cacc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 00:09:41.342114  177307 system_pods.go:74] duration metric: took 11.726461ms to wait for pod list to return data ...
	I1213 00:09:41.342132  177307 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:09:41.345985  177307 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:09:41.346011  177307 node_conditions.go:123] node cpu capacity is 2
	I1213 00:09:41.346021  177307 node_conditions.go:105] duration metric: took 3.884209ms to run NodePressure ...
	I1213 00:09:41.346038  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:41.682789  177307 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1213 00:09:41.690867  177307 kubeadm.go:787] kubelet initialised
	I1213 00:09:41.690892  177307 kubeadm.go:788] duration metric: took 8.076203ms waiting for restarted kubelet to initialise ...
	I1213 00:09:41.690902  177307 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:41.698622  177307 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-87nc6" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:43.720619  177307 pod_ready.go:102] pod "coredns-76f75df574-87nc6" in "kube-system" namespace has status "Ready":"False"
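Each "Ready":"False" line above is pod_ready.go re-checking the pod's PodReady status condition until it flips to True. The check itself is a scan over the pod's conditions; a sketch over the core/v1 types (assuming the k8s.io/api module is on the module path):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the PodReady condition is ConditionTrue.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        pod := &corev1.Pod{}
        pod.Status.Conditions = []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionFalse},
        }
        fmt.Println(isPodReady(pod)) // false, matching the "Ready":"False" lines above
    }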
	I1213 00:09:40.471390  177409 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.071602244s)
	I1213 00:09:40.471425  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:40.665738  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:40.786290  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:40.859198  177409 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:09:40.859302  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:40.887488  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:41.406398  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:41.906653  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:42.405784  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:42.906462  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:43.406489  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:43.432933  177409 api_server.go:72] duration metric: took 2.573735322s to wait for apiserver process to appear ...
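The repeated pgrep calls above poll until the kube-apiserver process exists (about 2.6s in this run). A local-only sketch of that wait loop (hypothetical; minikube runs the same pgrep over SSH on the guest):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process is found
            if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                fmt.Println("apiserver process appeared")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver process")
    }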
	I1213 00:09:43.432975  177409 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:09:43.432997  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:43.433588  177409 api_server.go:269] stopped: https://192.168.72.144:8444/healthz: Get "https://192.168.72.144:8444/healthz": dial tcp 192.168.72.144:8444: connect: connection refused
	I1213 00:09:43.433641  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:43.434089  177409 api_server.go:269] stopped: https://192.168.72.144:8444/healthz: Get "https://192.168.72.144:8444/healthz": dial tcp 192.168.72.144:8444: connect: connection refused
	I1213 00:09:43.934469  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:42.779498  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.779971  176813 main.go:141] libmachine: (old-k8s-version-508612) Found IP for machine: 192.168.39.70
	I1213 00:09:42.779993  176813 main.go:141] libmachine: (old-k8s-version-508612) Reserving static IP address...
	I1213 00:09:42.780011  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has current primary IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.780466  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "old-k8s-version-508612", mac: "52:54:00:dd:da:91", ip: "192.168.39.70"} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:42.780504  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | skip adding static IP to network mk-old-k8s-version-508612 - found existing host DHCP lease matching {name: "old-k8s-version-508612", mac: "52:54:00:dd:da:91", ip: "192.168.39.70"}
	I1213 00:09:42.780524  176813 main.go:141] libmachine: (old-k8s-version-508612) Reserved static IP address: 192.168.39.70
	I1213 00:09:42.780547  176813 main.go:141] libmachine: (old-k8s-version-508612) Waiting for SSH to be available...
	I1213 00:09:42.780559  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Getting to WaitForSSH function...
	I1213 00:09:42.783019  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.783434  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:42.783482  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.783566  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Using SSH client type: external
	I1213 00:09:42.783598  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa (-rw-------)
	I1213 00:09:42.783638  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:09:42.783661  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | About to run SSH command:
	I1213 00:09:42.783681  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | exit 0
	I1213 00:09:42.885148  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | SSH cmd err, output: <nil>: 
	I1213 00:09:42.885690  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetConfigRaw
	I1213 00:09:42.886388  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetIP
	I1213 00:09:42.889440  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.889898  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:42.889937  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.890209  176813 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/config.json ...
	I1213 00:09:42.890423  176813 machine.go:88] provisioning docker machine ...
	I1213 00:09:42.890444  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:42.890685  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetMachineName
	I1213 00:09:42.890874  176813 buildroot.go:166] provisioning hostname "old-k8s-version-508612"
	I1213 00:09:42.890899  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetMachineName
	I1213 00:09:42.891039  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:42.893678  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.894021  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:42.894051  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.894174  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:42.894391  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:42.894556  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:42.894720  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:42.894909  176813 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:42.895383  176813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1213 00:09:42.895406  176813 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-508612 && echo "old-k8s-version-508612" | sudo tee /etc/hostname
	I1213 00:09:43.045290  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-508612
	
	I1213 00:09:43.045345  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.048936  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.049438  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.049476  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.049662  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:43.049877  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.050074  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.050231  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:43.050413  176813 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:43.050888  176813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1213 00:09:43.050919  176813 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-508612' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-508612/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-508612' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:09:43.183021  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:09:43.183061  176813 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:09:43.183089  176813 buildroot.go:174] setting up certificates
	I1213 00:09:43.183102  176813 provision.go:83] configureAuth start
	I1213 00:09:43.183115  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetMachineName
	I1213 00:09:43.183467  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetIP
	I1213 00:09:43.186936  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.187409  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.187441  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.187620  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.190125  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.190560  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.190612  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.190775  176813 provision.go:138] copyHostCerts
	I1213 00:09:43.190842  176813 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:09:43.190861  176813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:09:43.190936  176813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:09:43.191113  176813 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:09:43.191126  176813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:09:43.191158  176813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:09:43.191245  176813 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:09:43.191256  176813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:09:43.191284  176813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:09:43.191354  176813 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-508612 san=[192.168.39.70 192.168.39.70 localhost 127.0.0.1 minikube old-k8s-version-508612]
	I1213 00:09:43.321927  176813 provision.go:172] copyRemoteCerts
	I1213 00:09:43.321999  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:09:43.322039  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.325261  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.325653  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.325686  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.325920  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:43.326128  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.326300  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:43.326474  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:09:43.420656  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:09:43.445997  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1213 00:09:43.471466  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 00:09:43.500104  176813 provision.go:86] duration metric: configureAuth took 316.983913ms
	I1213 00:09:43.500137  176813 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:09:43.500380  176813 config.go:182] Loaded profile config "old-k8s-version-508612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1213 00:09:43.500554  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.503567  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.503994  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.504034  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.504320  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:43.504551  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.504797  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.504978  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:43.505164  176813 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:43.505640  176813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1213 00:09:43.505668  176813 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:09:43.859639  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:09:43.859723  176813 machine.go:91] provisioned docker machine in 969.28446ms
	I1213 00:09:43.859741  176813 start.go:300] post-start starting for "old-k8s-version-508612" (driver="kvm2")
	I1213 00:09:43.859754  176813 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:09:43.859781  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:43.860174  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:09:43.860207  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.863407  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.863903  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.863944  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.864142  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:43.864340  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.864604  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:43.864907  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:09:43.957616  176813 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:09:43.963381  176813 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:09:43.963413  176813 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:09:43.963489  176813 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:09:43.963594  176813 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:09:43.963710  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:09:43.972902  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:44.001469  176813 start.go:303] post-start completed in 141.706486ms
	I1213 00:09:44.001503  176813 fix.go:56] fixHost completed within 21.932134773s
	I1213 00:09:44.001532  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:44.004923  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.005334  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:44.005410  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.005545  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:44.005846  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:44.006067  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:44.006198  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:44.006401  176813 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:44.006815  176813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1213 00:09:44.006841  176813 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1213 00:09:44.134363  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702426184.079167065
	
	I1213 00:09:44.134389  176813 fix.go:206] guest clock: 1702426184.079167065
	I1213 00:09:44.134398  176813 fix.go:219] Guest: 2023-12-13 00:09:44.079167065 +0000 UTC Remote: 2023-12-13 00:09:44.001508908 +0000 UTC m=+368.244893563 (delta=77.658157ms)
	I1213 00:09:44.134434  176813 fix.go:190] guest clock delta is within tolerance: 77.658157ms
	I1213 00:09:44.134446  176813 start.go:83] releasing machines lock for "old-k8s-version-508612", held for 22.06510734s
	I1213 00:09:44.134469  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:44.134760  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetIP
	I1213 00:09:44.137820  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.138245  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:44.138275  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.138444  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:44.138957  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:44.139152  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:44.139229  176813 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:09:44.139300  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:44.139358  176813 ssh_runner.go:195] Run: cat /version.json
	I1213 00:09:44.139383  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:44.142396  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.142920  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:44.142981  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.143041  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.143197  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:44.143340  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:44.143473  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:44.143487  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:44.143505  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.143633  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:44.143628  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:09:44.143786  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:44.143913  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:44.144041  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:09:44.235010  176813 ssh_runner.go:195] Run: systemctl --version
	I1213 00:09:44.263174  176813 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:09:44.424330  176813 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:09:44.433495  176813 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:09:44.433573  176813 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:09:44.454080  176813 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:09:44.454106  176813 start.go:475] detecting cgroup driver to use...
	I1213 00:09:44.454173  176813 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:09:44.482370  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:09:44.499334  176813 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:09:44.499429  176813 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:09:44.516413  176813 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:09:44.529636  176813 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:09:44.638215  176813 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:09:44.774229  176813 docker.go:219] disabling docker service ...
	I1213 00:09:44.774304  176813 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:09:44.790414  176813 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:09:44.804909  176813 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:09:44.938205  176813 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:09:45.069429  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:09:45.085783  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:09:45.105487  176813 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1213 00:09:45.105558  176813 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:45.117662  176813 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:09:45.117789  176813 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:45.129560  176813 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:45.139168  176813 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:45.148466  176813 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:09:45.157626  176813 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:09:45.166608  176813 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:09:45.166675  176813 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:09:45.179666  176813 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:09:45.190356  176813 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:09:45.366019  176813 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 00:09:45.549130  176813 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:09:45.549209  176813 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:09:45.554753  176813 start.go:543] Will wait 60s for crictl version
	I1213 00:09:45.554809  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:45.559452  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:09:45.605106  176813 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1213 00:09:45.605180  176813 ssh_runner.go:195] Run: crio --version
	I1213 00:09:45.654428  176813 ssh_runner.go:195] Run: crio --version
	I1213 00:09:45.711107  176813 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1213 00:09:45.712598  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetIP
	I1213 00:09:45.716022  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:45.716371  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:45.716405  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:45.716751  176813 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 00:09:45.722339  176813 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:45.739528  176813 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1213 00:09:45.739594  176813 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:45.786963  176813 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1213 00:09:45.787044  176813 ssh_runner.go:195] Run: which lz4
	I1213 00:09:45.791462  176813 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1213 00:09:45.795923  176813 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 00:09:45.795952  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1213 00:09:43.228658  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:45.231385  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:45.721999  177307 pod_ready.go:92] pod "coredns-76f75df574-87nc6" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:45.722026  177307 pod_ready.go:81] duration metric: took 4.023377357s waiting for pod "coredns-76f75df574-87nc6" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:45.722038  177307 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:47.744891  177307 pod_ready.go:102] pod "etcd-no-preload-143586" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:48.255190  177307 pod_ready.go:92] pod "etcd-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:48.255220  177307 pod_ready.go:81] duration metric: took 2.533174326s waiting for pod "etcd-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:48.255233  177307 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:48.263450  177307 pod_ready.go:92] pod "kube-apiserver-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:48.263477  177307 pod_ready.go:81] duration metric: took 8.236475ms waiting for pod "kube-apiserver-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:48.263489  177307 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:48.212975  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:48.213009  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:48.213033  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:48.303921  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:48.303963  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:48.435167  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:48.442421  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:48.442455  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:48.934740  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:48.941126  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:48.941161  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:49.434967  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:49.444960  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:49.445016  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:49.935234  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:49.941400  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I1213 00:09:49.951057  177409 api_server.go:141] control plane version: v1.28.4
	I1213 00:09:49.951094  177409 api_server.go:131] duration metric: took 6.518109828s to wait for apiserver health ...
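	(Note on the healthz probes above: the startup loop simply re-polls the apiserver's /healthz endpoint until it returns 200, treating the transient 403 responses, seen while the RBAC bootstrap roles are still missing, and the 500 responses, seen while poststarthooks are still failing, as "not ready yet". A minimal illustrative sketch of that pattern in Go follows; the URL, interval, and timeout values are assumptions for the example and this is not minikube's actual implementation.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
	// or the timeout expires. Non-200 responses (403, 500) are treated as
	// "not ready yet", mirroring the behaviour visible in the log above.
	func waitForHealthz(url string, interval, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serves a bootstrap certificate here, so this sketch
			// skips verification; a real client would trust the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: apiserver is ready
				}
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.144:8444/healthz", 500*time.Millisecond, time.Minute); err != nil {
			fmt.Println(err)
		}
	}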
	I1213 00:09:49.951105  177409 cni.go:84] Creating CNI manager for ""
	I1213 00:09:49.951115  177409 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:49.953198  177409 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:09:49.954914  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:09:49.989291  177409 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:09:47.527308  176813 crio.go:444] Took 1.735860 seconds to copy over tarball
	I1213 00:09:47.527390  176813 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 00:09:50.641162  176813 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.113740813s)
	I1213 00:09:50.641195  176813 crio.go:451] Took 3.113856 seconds to extract the tarball
	I1213 00:09:50.641208  176813 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 00:09:50.683194  176813 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:50.729476  176813 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1213 00:09:50.729503  176813 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 00:09:50.729574  176813 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:50.729602  176813 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1213 00:09:50.729611  176813 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1213 00:09:50.729617  176813 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:50.729653  176813 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:50.729605  176813 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:50.729572  176813 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:50.729589  176813 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:50.730849  176813 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:50.730908  176813 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:50.730924  176813 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1213 00:09:50.730968  176813 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:50.730986  176813 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:50.730997  176813 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:50.730847  176813 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1213 00:09:50.731163  176813 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:47.235674  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:49.728030  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:50.051886  177409 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:09:50.069774  177409 system_pods.go:59] 8 kube-system pods found
	I1213 00:09:50.069817  177409 system_pods.go:61] "coredns-5dd5756b68-ftv9l" [60d9730b-2e6b-4263-a70d-273cf6837f60] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 00:09:50.069834  177409 system_pods.go:61] "etcd-default-k8s-diff-port-743278" [4c78aadc-8213-4cd3-a365-e8d71d00b1ed] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 00:09:50.069849  177409 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-743278" [2590235b-fc24-41ed-8879-909cbba26d5c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 00:09:50.069862  177409 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-743278" [58396a31-0a0d-4a3f-b658-e27f836affd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 00:09:50.069875  177409 system_pods.go:61] "kube-proxy-zk4wl" [e20fe8f7-0c1f-4be3-8184-cd3d6cc19a43] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 00:09:50.069887  177409 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-743278" [e4312815-a719-4ab5-b628-9958dd7ce658] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 00:09:50.069907  177409 system_pods.go:61] "metrics-server-57f55c9bc5-6q9jg" [b1849258-4fd1-43a5-b67b-02d8e44acd8b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:09:50.069919  177409 system_pods.go:61] "storage-provisioner" [d87ee16e-300f-4797-b0be-efc256d0e827] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 00:09:50.069932  177409 system_pods.go:74] duration metric: took 18.020213ms to wait for pod list to return data ...
	I1213 00:09:50.069944  177409 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:09:50.073659  177409 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:09:50.073688  177409 node_conditions.go:123] node cpu capacity is 2
	I1213 00:09:50.073701  177409 node_conditions.go:105] duration metric: took 3.752016ms to run NodePressure ...
	I1213 00:09:50.073722  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:50.545413  177409 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1213 00:09:50.559389  177409 kubeadm.go:787] kubelet initialised
	I1213 00:09:50.559421  177409 kubeadm.go:788] duration metric: took 13.971205ms waiting for restarted kubelet to initialise ...
	I1213 00:09:50.559442  177409 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:50.568069  177409 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.580294  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.580327  177409 pod_ready.go:81] duration metric: took 12.225698ms waiting for pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.580340  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.580348  177409 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.588859  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.588893  177409 pod_ready.go:81] duration metric: took 8.526992ms waiting for pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.588909  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.588917  177409 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.609726  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.609759  177409 pod_ready.go:81] duration metric: took 20.834011ms waiting for pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.609773  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.609781  177409 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.626724  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.626757  177409 pod_ready.go:81] duration metric: took 16.966751ms waiting for pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.626770  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.626777  177409 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zk4wl" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.950893  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "kube-proxy-zk4wl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.950927  177409 pod_ready.go:81] duration metric: took 324.143266ms waiting for pod "kube-proxy-zk4wl" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.950939  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "kube-proxy-zk4wl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.950948  177409 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:51.465200  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:51.465227  177409 pod_ready.go:81] duration metric: took 514.267219ms waiting for pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:51.465242  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:51.465251  177409 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:52.111655  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:52.111690  177409 pod_ready.go:81] duration metric: took 646.423162ms waiting for pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:52.111707  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:52.111716  177409 pod_ready.go:38] duration metric: took 1.552263211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:52.111735  177409 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:09:52.125125  177409 ops.go:34] apiserver oom_adj: -16
	I1213 00:09:52.125152  177409 kubeadm.go:640] restartCluster took 22.955643397s
	I1213 00:09:52.125175  177409 kubeadm.go:406] StartCluster complete in 23.016262726s
	I1213 00:09:52.125204  177409 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:52.125379  177409 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:09:52.128126  177409 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:52.226763  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:09:52.226947  177409 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:09:52.227030  177409 config.go:182] Loaded profile config "default-k8s-diff-port-743278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:09:52.227060  177409 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-743278"
	I1213 00:09:52.227071  177409 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-743278"
	I1213 00:09:52.227082  177409 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-743278"
	I1213 00:09:52.227088  177409 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-743278"
	W1213 00:09:52.227092  177409 addons.go:240] addon storage-provisioner should already be in state true
	I1213 00:09:52.227115  177409 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-743278"
	I1213 00:09:52.227154  177409 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-743278"
	I1213 00:09:52.227165  177409 host.go:66] Checking if "default-k8s-diff-port-743278" exists ...
	W1213 00:09:52.227173  177409 addons.go:240] addon metrics-server should already be in state true
	I1213 00:09:52.227252  177409 host.go:66] Checking if "default-k8s-diff-port-743278" exists ...
	I1213 00:09:52.227633  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.227667  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.227633  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.227698  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.227728  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.227794  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.500974  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I1213 00:09:52.501503  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.502103  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.502130  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.502518  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.503096  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.503120  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40999
	I1213 00:09:52.503173  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.503249  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35141
	I1213 00:09:52.503460  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.503653  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.503952  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.503979  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.504117  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.504137  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.504326  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.504485  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.504680  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:52.504910  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.504957  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.508425  177409 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-743278"
	W1213 00:09:52.508466  177409 addons.go:240] addon default-storageclass should already be in state true
	I1213 00:09:52.508495  177409 host.go:66] Checking if "default-k8s-diff-port-743278" exists ...
	I1213 00:09:52.508941  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.508989  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.520593  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35027
	I1213 00:09:52.521055  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37697
	I1213 00:09:52.521104  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.521443  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.521602  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.521630  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.521891  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.521917  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.521956  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.522162  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:52.522300  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.522506  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:52.523942  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:52.524208  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40367
	I1213 00:09:52.524419  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:52.612780  177409 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 00:09:52.524612  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.755661  177409 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 00:09:52.941509  177409 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:52.941551  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 00:09:53.149407  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:52.881597  177409 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-743278" context rescaled to 1 replicas
	I1213 00:09:53.149472  177409 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:09:53.149496  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:09:52.884700  177409 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1213 00:09:52.756216  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:53.149523  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:53.149532  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:53.149484  177409 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:09:53.150147  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:53.153109  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.153288  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.360880  177409 out.go:177] * Verifying Kubernetes components...
	I1213 00:09:53.153717  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:53.153952  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:53.361036  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:50.301405  177307 pod_ready.go:102] pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:52.803001  177307 pod_ready.go:102] pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:53.361074  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:53.466451  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.361087  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.361322  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:53.466546  177409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:09:53.361364  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:53.361590  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:53.466661  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:53.466906  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:53.466963  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:53.467166  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:53.467266  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:53.489618  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41785
	I1213 00:09:53.490349  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:53.490932  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:53.490951  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:53.491365  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:53.491579  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:53.494223  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:53.495774  177409 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:09:53.495796  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:09:53.495816  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:53.499620  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.500099  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:53.500124  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.500405  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:53.500592  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:53.500734  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:53.501069  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:53.667878  177409 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-743278" to be "Ready" ...
	I1213 00:09:53.806167  177409 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 00:09:53.806194  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 00:09:53.807837  177409 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:09:53.808402  177409 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:09:53.915171  177409 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 00:09:53.915199  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 00:09:53.993146  177409 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:09:53.993172  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 00:09:54.071008  177409 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:09:50.865405  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:50.866538  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:50.867587  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1213 00:09:50.871289  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:50.872472  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1213 00:09:50.878541  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:50.882665  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:50.978405  176813 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1213 00:09:50.978458  176813 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:50.978527  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.038778  176813 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1213 00:09:51.038824  176813 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:51.038877  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.048868  176813 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1213 00:09:51.048925  176813 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1213 00:09:51.048983  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.054956  176813 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1213 00:09:51.055003  176813 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:51.055045  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.055045  176813 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1213 00:09:51.055133  176813 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1213 00:09:51.055162  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.069915  176813 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1213 00:09:51.069971  176813 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:51.070018  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.073904  176813 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1213 00:09:51.073955  176813 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:51.073990  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:51.074058  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:51.073997  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.074127  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1213 00:09:51.074173  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:51.074270  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1213 00:09:51.076866  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:51.216889  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:51.217032  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1213 00:09:51.217046  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1213 00:09:51.217118  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1213 00:09:51.217157  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1213 00:09:51.217213  176813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1213 00:09:51.217804  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1213 00:09:51.217887  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1213 00:09:51.224310  176813 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1213 00:09:51.224329  176813 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1213 00:09:51.224373  176813 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1213 00:09:51.270398  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1213 00:09:51.651719  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:53.599238  176813 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.374835203s)
	I1213 00:09:53.599269  176813 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1213 00:09:53.599323  176813 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.947557973s)
	I1213 00:09:53.599398  176813 cache_images.go:92] LoadImages completed in 2.869881827s
	W1213 00:09:53.599497  176813 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
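	(Note on the image-cache fallback above: because the preload tarball did not contain the expected v1.16.0 images, the run checks each required image with "podman image inspect", removes stale entries via crictl, and loads whatever tarballs exist in the local cache with "podman load -i"; only pause_3.1 is present, hence the warning. A small illustrative Go sketch of that check-then-load pattern follows; the function name and the example image/tarball paths are placeholders taken from the log, not minikube's actual code.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureImage checks whether the container runtime already has the image,
	// and if not, loads it from a locally cached tarball.
	func ensureImage(image, cachedTarball string) error {
		// "podman image inspect" exits non-zero when the image is absent.
		if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
			return nil // image already present in the container runtime
		}
		// Load the image from the cached tarball, as the log shows for pause_3.1.
		out, err := exec.Command("sudo", "podman", "load", "-i", cachedTarball).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load %s: %v\n%s", cachedTarball, err, out)
		}
		return nil
	}

	func main() {
		if err := ensureImage("registry.k8s.io/pause:3.1", "/var/lib/minikube/images/pause_3.1"); err != nil {
			fmt.Println(err)
		}
	}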
	I1213 00:09:53.599587  176813 ssh_runner.go:195] Run: crio config
	I1213 00:09:53.669735  176813 cni.go:84] Creating CNI manager for ""
	I1213 00:09:53.669767  176813 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:53.669792  176813 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1213 00:09:53.669814  176813 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.70 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-508612 NodeName:old-k8s-version-508612 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1213 00:09:53.669991  176813 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-508612"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-508612
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.70:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 00:09:53.670076  176813 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-508612 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-508612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1213 00:09:53.670138  176813 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1213 00:09:53.680033  176813 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:09:53.680120  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:09:53.689595  176813 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1213 00:09:53.707167  176813 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 00:09:53.726978  176813 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1213 00:09:53.746191  176813 ssh_runner.go:195] Run: grep 192.168.39.70	control-plane.minikube.internal$ /etc/hosts
	I1213 00:09:53.750290  176813 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:53.763369  176813 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612 for IP: 192.168.39.70
	I1213 00:09:53.763407  176813 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:53.763598  176813 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:09:53.763671  176813 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:09:53.763776  176813 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/client.key
	I1213 00:09:53.763855  176813 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/apiserver.key.5467de6f
	I1213 00:09:53.763914  176813 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/proxy-client.key
	I1213 00:09:53.764055  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:09:53.764098  176813 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:09:53.764115  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:09:53.764158  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:09:53.764195  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:09:53.764238  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:09:53.764297  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:53.765315  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:09:53.793100  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 00:09:53.821187  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:09:53.847791  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 00:09:53.873741  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:09:53.903484  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:09:53.930420  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:09:53.958706  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:09:53.986236  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:09:54.011105  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:09:54.034546  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:09:54.070680  176813 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:09:54.093063  176813 ssh_runner.go:195] Run: openssl version
	I1213 00:09:54.100686  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:09:54.114647  176813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:54.121380  176813 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:54.121463  176813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:54.128895  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 00:09:54.142335  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:09:54.155146  176813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:09:54.159746  176813 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:09:54.159817  176813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:09:54.166153  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1213 00:09:54.176190  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:09:54.187049  176813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:09:54.191667  176813 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:09:54.191737  176813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:09:54.197335  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 00:09:54.208790  176813 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:09:54.213230  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 00:09:54.219377  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 00:09:54.225539  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 00:09:54.232970  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 00:09:54.240720  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 00:09:54.247054  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 00:09:54.253486  176813 kubeadm.go:404] StartCluster: {Name:old-k8s-version-508612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-508612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:09:54.253600  176813 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:09:54.253674  176813 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:54.303024  176813 cri.go:89] found id: ""
	I1213 00:09:54.303102  176813 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:09:54.317795  176813 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1213 00:09:54.317827  176813 kubeadm.go:636] restartCluster start
	I1213 00:09:54.317884  176813 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 00:09:54.331180  176813 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:54.332572  176813 kubeconfig.go:92] found "old-k8s-version-508612" server: "https://192.168.39.70:8443"
	I1213 00:09:54.335079  176813 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 00:09:54.346247  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:54.346292  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:54.362692  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:54.362720  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:54.362776  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:54.377570  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:54.878307  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:54.878384  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:54.891159  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:55.377679  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:55.377789  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:55.392860  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:52.229764  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:54.232636  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:55.162034  177409 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.354143542s)
	I1213 00:09:55.162091  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.162103  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.162486  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.162503  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.162519  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.162536  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.162554  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.162887  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.162916  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.162961  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.255531  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.255561  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.255844  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.255867  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.686976  177409 node_ready.go:58] node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:55.814831  177409 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.006392676s)
	I1213 00:09:55.814885  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.814905  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.815237  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.815300  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.815315  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.815325  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.815675  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.815693  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.815721  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.959447  177409 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.88836869s)
	I1213 00:09:55.959502  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.959519  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.959909  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.959931  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.959941  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.959943  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.959950  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.960189  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.960205  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.960223  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.960226  177409 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-743278"
	I1213 00:09:55.962464  177409 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1213 00:09:54.302018  177307 pod_ready.go:92] pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:54.302047  177307 pod_ready.go:81] duration metric: took 6.038549186s waiting for pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.302061  177307 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8k9x6" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.308192  177307 pod_ready.go:92] pod "kube-proxy-8k9x6" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:54.308220  177307 pod_ready.go:81] duration metric: took 6.150452ms waiting for pod "kube-proxy-8k9x6" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.308233  177307 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.829614  177307 pod_ready.go:92] pod "kube-scheduler-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:54.829639  177307 pod_ready.go:81] duration metric: took 521.398817ms waiting for pod "kube-scheduler-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.829649  177307 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:56.842731  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:55.964691  177409 addons.go:502] enable addons completed in 3.737755135s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1213 00:09:58.183398  177409 node_ready.go:58] node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:58.683603  177409 node_ready.go:49] node "default-k8s-diff-port-743278" has status "Ready":"True"
	I1213 00:09:58.683629  177409 node_ready.go:38] duration metric: took 5.01572337s waiting for node "default-k8s-diff-port-743278" to be "Ready" ...
	I1213 00:09:58.683638  177409 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:58.692636  177409 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:58.699084  177409 pod_ready.go:92] pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:58.699103  177409 pod_ready.go:81] duration metric: took 6.437856ms waiting for pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:58.699111  177409 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:55.877904  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:55.877977  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:55.893729  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:56.377737  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:56.377803  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:56.389754  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:56.878464  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:56.878530  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:56.891849  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:57.377841  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:57.377929  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:57.389962  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:57.878384  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:57.878464  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:57.892518  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:58.378033  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:58.378119  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:58.391780  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:58.878309  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:58.878397  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:58.890677  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:59.378117  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:59.378239  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:59.390695  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:59.878240  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:59.878318  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:59.889688  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:00.378278  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:00.378376  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:00.390756  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:56.727591  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:58.729633  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:59.343431  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:01.344195  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:03.842943  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:00.718294  177409 pod_ready.go:102] pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:01.216472  177409 pod_ready.go:92] pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.216499  177409 pod_ready.go:81] duration metric: took 2.517381433s waiting for pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.216513  177409 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.221993  177409 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.222016  177409 pod_ready.go:81] duration metric: took 5.495703ms waiting for pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.222026  177409 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.227513  177409 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.227543  177409 pod_ready.go:81] duration metric: took 5.506889ms waiting for pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.227555  177409 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zk4wl" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.485096  177409 pod_ready.go:92] pod "kube-proxy-zk4wl" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.485120  177409 pod_ready.go:81] duration metric: took 257.55839ms waiting for pod "kube-proxy-zk4wl" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.485131  177409 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.886812  177409 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.886843  177409 pod_ready.go:81] duration metric: took 401.704296ms waiting for pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.886860  177409 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:04.192658  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:00.878385  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:00.878514  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:00.891279  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:01.378010  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:01.378120  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:01.389897  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:01.878496  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:01.878581  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:01.890674  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:02.377657  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:02.377767  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:02.389165  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:02.877744  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:02.877886  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:02.889536  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:03.378083  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:03.378206  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:03.390009  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:03.878637  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:03.878757  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:03.891565  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:04.347244  176813 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1213 00:10:04.347324  176813 kubeadm.go:1135] stopping kube-system containers ...
	I1213 00:10:04.347339  176813 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 00:10:04.347431  176813 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:10:04.391480  176813 cri.go:89] found id: ""
	I1213 00:10:04.391558  176813 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 00:10:04.407659  176813 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:10:04.416545  176813 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:10:04.416616  176813 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:10:04.425366  176813 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1213 00:10:04.425393  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:04.553907  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:05.643662  176813 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.089700044s)
	I1213 00:10:05.643704  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:01.230857  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:03.728598  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:05.729292  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:05.843723  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:07.844549  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:06.193695  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:08.194425  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:05.881077  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:05.983444  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:06.106543  176813 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:10:06.106637  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:06.120910  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:06.637294  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:07.137087  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:07.636989  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:07.659899  176813 api_server.go:72] duration metric: took 1.5533541s to wait for apiserver process to appear ...
	I1213 00:10:07.659925  176813 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:10:07.659949  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:08.229410  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:10.729881  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:10.344919  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:12.842700  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:10.692378  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:12.693810  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:12.660316  176813 api_server.go:269] stopped: https://192.168.39.70:8443/healthz: Get "https://192.168.39.70:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 00:10:12.660365  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:13.933418  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:10:13.933452  176813 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:10:14.434114  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:14.442223  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1213 00:10:14.442261  176813 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1213 00:10:14.934425  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:14.941188  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1213 00:10:14.941232  176813 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1213 00:10:15.433614  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:15.441583  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I1213 00:10:15.449631  176813 api_server.go:141] control plane version: v1.16.0
	I1213 00:10:15.449656  176813 api_server.go:131] duration metric: took 7.789725712s to wait for apiserver health ...
	I1213 00:10:15.449671  176813 cni.go:84] Creating CNI manager for ""
	I1213 00:10:15.449677  176813 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:10:15.451328  176813 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:10:15.452690  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:10:15.463558  176813 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:10:15.482997  176813 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:10:15.493646  176813 system_pods.go:59] 7 kube-system pods found
	I1213 00:10:15.493674  176813 system_pods.go:61] "coredns-5644d7b6d9-jnhmk" [38a0c948-a47e-4566-ad47-376943787ca1] Running
	I1213 00:10:15.493679  176813 system_pods.go:61] "etcd-old-k8s-version-508612" [80e685b2-cd70-4b7d-b00c-feda3ab1a509] Running
	I1213 00:10:15.493683  176813 system_pods.go:61] "kube-apiserver-old-k8s-version-508612" [657f1d7b-4fcb-44d4-96d3-3cc659fb9f0a] Running
	I1213 00:10:15.493688  176813 system_pods.go:61] "kube-controller-manager-old-k8s-version-508612" [d84a0927-7d19-4bba-8afd-b32877a9aee3] Running
	I1213 00:10:15.493692  176813 system_pods.go:61] "kube-proxy-fpd4j" [f2e9e528-576f-4339-b208-09ee5dbe7fcb] Running
	I1213 00:10:15.493696  176813 system_pods.go:61] "kube-scheduler-old-k8s-version-508612" [ce5ff03a-23bf-4cce-8795-58e412fc841c] Running
	I1213 00:10:15.493699  176813 system_pods.go:61] "storage-provisioner" [98a03a45-0cd3-40b4-9e66-6df14db5a848] Running
	I1213 00:10:15.493706  176813 system_pods.go:74] duration metric: took 10.683423ms to wait for pod list to return data ...
	I1213 00:10:15.493715  176813 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:10:15.498679  176813 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:10:15.498726  176813 node_conditions.go:123] node cpu capacity is 2
	I1213 00:10:15.498742  176813 node_conditions.go:105] duration metric: took 5.021318ms to run NodePressure ...
	I1213 00:10:15.498767  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:15.762302  176813 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1213 00:10:15.766665  176813 retry.go:31] will retry after 288.591747ms: kubelet not initialised
	I1213 00:10:13.228878  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:15.728396  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:15.343194  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:17.344384  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:15.193995  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:17.693024  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:19.693723  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:16.063637  176813 retry.go:31] will retry after 250.40677ms: kubelet not initialised
	I1213 00:10:16.320362  176813 retry.go:31] will retry after 283.670967ms: kubelet not initialised
	I1213 00:10:16.610834  176813 retry.go:31] will retry after 810.845397ms: kubelet not initialised
	I1213 00:10:17.427101  176813 retry.go:31] will retry after 1.00058932s: kubelet not initialised
	I1213 00:10:18.498625  176813 retry.go:31] will retry after 2.616819597s: kubelet not initialised
	I1213 00:10:18.226990  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:20.228211  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:19.345330  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:21.843959  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:22.192449  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:24.193001  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:21.120283  176813 retry.go:31] will retry after 1.883694522s: kubelet not initialised
	I1213 00:10:23.009312  176813 retry.go:31] will retry after 2.899361823s: kubelet not initialised
	I1213 00:10:22.727450  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:24.729952  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:24.342673  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:26.343639  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:28.842489  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:26.696279  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:29.194453  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:25.914801  176813 retry.go:31] will retry after 8.466541404s: kubelet not initialised
	I1213 00:10:27.227947  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:29.229430  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:30.843429  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:32.844457  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:31.692122  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:33.694437  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:34.391931  176813 retry.go:31] will retry after 6.686889894s: kubelet not initialised
	I1213 00:10:31.729052  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:33.730399  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:35.344029  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:37.842694  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:36.193427  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:38.193688  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:36.226978  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:38.227307  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:40.227797  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:40.343702  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:42.841574  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:40.693443  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:42.693668  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:41.084957  176813 retry.go:31] will retry after 18.68453817s: kubelet not initialised
	I1213 00:10:42.229433  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:44.728322  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:44.843586  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:46.844269  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:45.192582  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:47.691806  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:49.692545  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:47.227469  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:49.228908  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:49.343743  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:51.843948  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:51.694308  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:54.192629  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:51.728175  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:54.226904  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:54.342077  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:56.343115  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:58.345031  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:56.193137  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:58.693873  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:59.777116  176813 kubeadm.go:787] kubelet initialised
	I1213 00:10:59.777150  176813 kubeadm.go:788] duration metric: took 44.014819539s waiting for restarted kubelet to initialise ...
	I1213 00:10:59.777162  176813 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:10:59.782802  176813 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-jnhmk" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.788307  176813 pod_ready.go:92] pod "coredns-5644d7b6d9-jnhmk" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:59.788348  176813 pod_ready.go:81] duration metric: took 5.514049ms waiting for pod "coredns-5644d7b6d9-jnhmk" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.788356  176813 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-xsbd5" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.792569  176813 pod_ready.go:92] pod "coredns-5644d7b6d9-xsbd5" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:59.792588  176813 pod_ready.go:81] duration metric: took 4.224934ms waiting for pod "coredns-5644d7b6d9-xsbd5" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.792599  176813 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.797096  176813 pod_ready.go:92] pod "etcd-old-k8s-version-508612" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:59.797119  176813 pod_ready.go:81] duration metric: took 4.508662ms waiting for pod "etcd-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.797130  176813 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.801790  176813 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-508612" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:59.801811  176813 pod_ready.go:81] duration metric: took 4.673597ms waiting for pod "kube-apiserver-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.801818  176813 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.175474  176813 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-508612" in "kube-system" namespace has status "Ready":"True"
	I1213 00:11:00.175504  176813 pod_ready.go:81] duration metric: took 373.677737ms waiting for pod "kube-controller-manager-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.175523  176813 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fpd4j" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.576344  176813 pod_ready.go:92] pod "kube-proxy-fpd4j" in "kube-system" namespace has status "Ready":"True"
	I1213 00:11:00.576373  176813 pod_ready.go:81] duration metric: took 400.842191ms waiting for pod "kube-proxy-fpd4j" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.576387  176813 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:56.229570  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:58.728770  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:00.843201  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:03.343182  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:01.199677  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:03.201427  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:00.976886  176813 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-508612" in "kube-system" namespace has status "Ready":"True"
	I1213 00:11:00.976908  176813 pod_ready.go:81] duration metric: took 400.512629ms waiting for pod "kube-scheduler-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.976920  176813 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:03.283224  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:05.284030  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:01.229393  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:03.728570  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:05.843264  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:08.343228  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:05.694505  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:08.197100  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:07.285680  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:09.786591  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:06.227705  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:08.229577  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:10.727791  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:10.343300  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:12.843162  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:10.695161  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:13.195051  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:12.285865  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:14.785354  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:12.728656  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:15.227890  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:14.844312  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:16.847144  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:15.692597  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:18.193383  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:17.284986  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:19.786139  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:17.229608  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:19.728503  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:19.344056  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:21.843070  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:23.844051  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:20.692417  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:22.692912  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:24.693204  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:22.285292  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:24.784342  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:22.227286  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:24.228831  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:26.342758  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:28.347392  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:26.693376  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:28.696971  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:27.284643  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:29.284776  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:26.727796  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:29.227690  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:30.843482  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:32.844695  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:31.191962  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:33.192585  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:31.285494  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:33.285863  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:35.791234  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:31.727767  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:33.728047  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:35.342092  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:37.342356  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:35.196354  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:37.693679  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:38.285349  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:40.785094  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:36.228379  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:38.728361  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:40.728752  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:39.342944  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:41.343229  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:43.842669  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:40.192636  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:42.696348  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:43.284960  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:45.783972  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:42.730357  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:45.228371  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:45.844034  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:48.345622  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:45.199304  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:47.692399  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:49.692916  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:47.784062  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:49.784533  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:47.232607  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:49.727709  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:50.842207  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:52.845393  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:52.193829  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:54.694220  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:51.784671  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:54.284709  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:51.728053  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:53.729081  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:55.342783  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:57.343274  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:56.694508  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:59.194904  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:56.285342  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:58.783460  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:56.227395  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:58.231694  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:00.727822  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:59.343618  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:01.842326  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:03.842653  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:01.197290  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:03.694223  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:01.285393  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:03.784968  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:05.786110  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:02.728596  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:05.227456  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:05.843038  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:08.342838  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:05.695124  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:08.192630  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:08.284460  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:10.284768  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:07.728787  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:10.227036  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:10.344532  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:12.841921  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:10.193483  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:12.196550  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:14.693706  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:12.784036  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:14.784471  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:12.227952  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:14.228178  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:14.842965  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:17.343683  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:17.193131  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:19.692561  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:16.785596  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:19.285058  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:16.726702  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:18.728269  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:19.843031  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:22.343417  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:22.191869  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:24.193973  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:21.783890  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:23.784341  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:25.784521  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:21.227269  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:23.227691  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:25.228239  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:24.343805  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:26.346354  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:28.844254  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:26.693293  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:29.193583  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:27.784904  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:30.285014  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:27.727045  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:29.728691  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:31.346007  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:33.843421  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:31.194160  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:33.691639  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:32.784701  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:35.284958  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:32.226511  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:34.228892  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:36.342384  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:38.343546  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:35.694257  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:38.191620  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:37.286143  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:39.783802  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:36.727306  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:38.728168  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:40.850557  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:43.342393  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:40.192328  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:42.192749  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:44.693406  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:41.784411  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:43.789293  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:41.228591  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:43.728133  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:45.842401  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:47.843839  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:47.193847  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:49.692840  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:46.284387  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:48.284692  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:50.285419  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:46.228594  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:48.728575  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:50.343073  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:52.843034  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:51.692895  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:54.196344  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:52.785093  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:54.785238  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:51.226704  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:53.228359  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:55.228418  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:54.847060  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:57.345339  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:56.693854  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:59.191098  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:57.285101  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:59.783955  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:57.727063  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:59.727437  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:59.847179  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:02.343433  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:01.192388  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:03.693056  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:01.784055  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:03.784840  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:01.727635  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:03.727705  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:04.346684  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:06.843294  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:06.192928  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:08.693240  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:06.284092  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:08.784303  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:10.784971  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:06.228019  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:08.727726  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:09.342622  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:11.343211  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:13.843894  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:10.698298  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:13.191387  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:13.285854  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:15.790625  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:11.228300  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:13.730143  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:16.343574  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:18.343896  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:15.195797  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:17.694620  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:18.283712  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:20.284937  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:16.227280  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:17.419163  177122 pod_ready.go:81] duration metric: took 4m0.000090271s waiting for pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace to be "Ready" ...
	E1213 00:13:17.419207  177122 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1213 00:13:17.419233  177122 pod_ready.go:38] duration metric: took 4m12.64031929s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:13:17.419260  177122 kubeadm.go:640] restartCluster took 4m32.91279931s
	W1213 00:13:17.419346  177122 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1213 00:13:17.419387  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 00:13:20.847802  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:23.342501  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:20.193039  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:22.693730  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:22.285212  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:24.783901  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:25.343029  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:27.842840  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:25.194640  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:27.692515  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:29.695543  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:26.785503  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:29.284618  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:33.603614  177122 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (16.184189808s)
	I1213 00:13:33.603692  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:13:33.617573  177122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:13:33.626779  177122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:13:33.636160  177122 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:13:33.636214  177122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 00:13:33.694141  177122 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1213 00:13:33.694267  177122 kubeadm.go:322] [preflight] Running pre-flight checks
	I1213 00:13:33.853582  177122 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 00:13:33.853718  177122 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 00:13:33.853992  177122 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 00:13:34.092007  177122 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 00:13:29.844324  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:32.345926  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:34.093975  177122 out.go:204]   - Generating certificates and keys ...
	I1213 00:13:34.094125  177122 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1213 00:13:34.094198  177122 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1213 00:13:34.094297  177122 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 00:13:34.094492  177122 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1213 00:13:34.095287  177122 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 00:13:34.096041  177122 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1213 00:13:34.096841  177122 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1213 00:13:34.097551  177122 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1213 00:13:34.098399  177122 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 00:13:34.099122  177122 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 00:13:34.099844  177122 kubeadm.go:322] [certs] Using the existing "sa" key
	I1213 00:13:34.099929  177122 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 00:13:34.191305  177122 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 00:13:34.425778  177122 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 00:13:34.601958  177122 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 00:13:34.747536  177122 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 00:13:34.748230  177122 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 00:13:34.750840  177122 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 00:13:32.193239  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:34.691928  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:31.286291  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:33.786852  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:34.752409  177122 out.go:204]   - Booting up control plane ...
	I1213 00:13:34.752562  177122 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 00:13:34.752659  177122 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 00:13:34.752994  177122 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 00:13:34.772157  177122 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 00:13:34.774789  177122 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 00:13:34.774854  177122 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1213 00:13:34.926546  177122 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 00:13:34.346782  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:36.847723  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:36.694243  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:39.195903  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:36.284979  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:38.285685  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:40.286174  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:39.345989  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:41.353093  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:43.847024  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:43.435528  177122 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.506764 seconds
	I1213 00:13:43.435691  177122 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 00:13:43.454840  177122 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 00:13:43.997250  177122 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 00:13:43.997537  177122 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-335807 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 00:13:44.513097  177122 kubeadm.go:322] [bootstrap-token] Using token: a9yhsz.n5p4z1j5jkbj68ov
	I1213 00:13:44.514695  177122 out.go:204]   - Configuring RBAC rules ...
	I1213 00:13:44.514836  177122 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 00:13:44.520134  177122 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 00:13:44.528726  177122 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 00:13:44.535029  177122 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 00:13:44.539162  177122 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 00:13:44.545990  177122 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 00:13:44.561964  177122 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 00:13:44.831402  177122 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1213 00:13:44.927500  177122 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1213 00:13:44.931294  177122 kubeadm.go:322] 
	I1213 00:13:44.931371  177122 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1213 00:13:44.931389  177122 kubeadm.go:322] 
	I1213 00:13:44.931500  177122 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1213 00:13:44.931509  177122 kubeadm.go:322] 
	I1213 00:13:44.931535  177122 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1213 00:13:44.931605  177122 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 00:13:44.931674  177122 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 00:13:44.931681  177122 kubeadm.go:322] 
	I1213 00:13:44.931743  177122 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1213 00:13:44.931752  177122 kubeadm.go:322] 
	I1213 00:13:44.931838  177122 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 00:13:44.931861  177122 kubeadm.go:322] 
	I1213 00:13:44.931938  177122 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1213 00:13:44.932026  177122 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 00:13:44.932139  177122 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 00:13:44.932151  177122 kubeadm.go:322] 
	I1213 00:13:44.932260  177122 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 00:13:44.932367  177122 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1213 00:13:44.932386  177122 kubeadm.go:322] 
	I1213 00:13:44.932533  177122 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token a9yhsz.n5p4z1j5jkbj68ov \
	I1213 00:13:44.932702  177122 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d \
	I1213 00:13:44.932726  177122 kubeadm.go:322] 	--control-plane 
	I1213 00:13:44.932730  177122 kubeadm.go:322] 
	I1213 00:13:44.932797  177122 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1213 00:13:44.932808  177122 kubeadm.go:322] 
	I1213 00:13:44.932927  177122 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token a9yhsz.n5p4z1j5jkbj68ov \
	I1213 00:13:44.933074  177122 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1213 00:13:44.933953  177122 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 00:13:44.934004  177122 cni.go:84] Creating CNI manager for ""
	I1213 00:13:44.934026  177122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:13:44.935893  177122 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:13:41.694337  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:44.192303  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:42.783865  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:44.784599  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:44.937355  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:13:44.961248  177122 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:13:45.005684  177122 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:13:45.005758  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:45.005789  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=embed-certs-335807 minikube.k8s.io/updated_at=2023_12_13T00_13_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:45.117205  177122 ops.go:34] apiserver oom_adj: -16
	I1213 00:13:45.402961  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:45.532503  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:46.343927  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:48.843509  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:46.197988  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:48.691611  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:46.785080  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:49.283316  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:46.138647  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:46.639104  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:47.139139  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:47.638244  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:48.138634  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:48.638352  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:49.138616  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:49.639061  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:50.138633  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:50.639013  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:51.343525  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:53.345044  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:50.693254  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:52.693448  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:51.286352  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:53.782966  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:55.786792  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:51.138430  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:51.638340  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:52.138696  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:52.638727  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:53.138509  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:53.639092  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:54.138153  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:54.638781  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:55.138875  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:55.639166  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:56.138534  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:56.638726  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:57.138427  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:57.273101  177122 kubeadm.go:1088] duration metric: took 12.26741009s to wait for elevateKubeSystemPrivileges.
	I1213 00:13:57.273139  177122 kubeadm.go:406] StartCluster complete in 5m12.825293837s
	I1213 00:13:57.273163  177122 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:13:57.273294  177122 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:13:57.275845  177122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:13:57.276142  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:13:57.276488  177122 config.go:182] Loaded profile config "embed-certs-335807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:13:57.276665  177122 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:13:57.276739  177122 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-335807"
	I1213 00:13:57.276756  177122 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-335807"
	W1213 00:13:57.276765  177122 addons.go:240] addon storage-provisioner should already be in state true
	I1213 00:13:57.276812  177122 host.go:66] Checking if "embed-certs-335807" exists ...
	I1213 00:13:57.277245  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.277283  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.277356  177122 addons.go:69] Setting default-storageclass=true in profile "embed-certs-335807"
	I1213 00:13:57.277374  177122 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-335807"
	I1213 00:13:57.277528  177122 addons.go:69] Setting metrics-server=true in profile "embed-certs-335807"
	I1213 00:13:57.277545  177122 addons.go:231] Setting addon metrics-server=true in "embed-certs-335807"
	W1213 00:13:57.277552  177122 addons.go:240] addon metrics-server should already be in state true
	I1213 00:13:57.277599  177122 host.go:66] Checking if "embed-certs-335807" exists ...
	I1213 00:13:57.277791  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.277820  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.277923  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.277945  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.296571  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I1213 00:13:57.299879  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I1213 00:13:57.299897  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39203
	I1213 00:13:57.300251  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.300833  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.300906  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.300923  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.300935  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.301294  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.301309  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.301330  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.301419  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.301427  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.301497  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:13:57.301728  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.301774  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.302199  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.302232  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.303181  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.303222  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.304586  177122 addons.go:231] Setting addon default-storageclass=true in "embed-certs-335807"
	W1213 00:13:57.304601  177122 addons.go:240] addon default-storageclass should already be in state true
	I1213 00:13:57.304620  177122 host.go:66] Checking if "embed-certs-335807" exists ...
	I1213 00:13:57.304860  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.304891  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.323403  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I1213 00:13:57.324103  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.324810  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40115
	I1213 00:13:57.324961  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.324985  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.325197  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.325332  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.325518  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:13:57.325910  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.325935  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.326524  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.326731  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:13:57.328013  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:13:57.329895  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:13:57.332188  177122 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 00:13:57.333332  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I1213 00:13:57.333375  177122 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 00:13:57.334952  177122 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:13:57.333392  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 00:13:57.333795  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.337096  177122 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:13:57.337110  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:13:57.337124  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:13:57.337162  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:13:57.337564  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.337585  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.339793  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.340514  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.340572  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.340821  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.341606  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:13:57.341657  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.341829  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:13:57.342023  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:13:57.342206  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:13:57.342411  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:13:57.347105  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.347512  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:13:57.347538  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.347782  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:13:57.347974  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:13:57.348108  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:13:57.348213  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:13:57.359690  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I1213 00:13:57.360385  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.361065  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.361093  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.361567  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.361777  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:13:57.363693  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:13:57.364020  177122 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:13:57.364037  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:13:57.364056  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:13:57.367409  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.367874  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:13:57.367904  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.368086  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:13:57.368287  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:13:57.368470  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:13:57.368619  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:13:57.399353  177122 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-335807" context rescaled to 1 replicas
	I1213 00:13:57.399391  177122 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.249 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:13:57.401371  177122 out.go:177] * Verifying Kubernetes components...
	I1213 00:13:54.829811  177307 pod_ready.go:81] duration metric: took 4m0.000140793s waiting for pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace to be "Ready" ...
	E1213 00:13:54.829844  177307 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1213 00:13:54.829878  177307 pod_ready.go:38] duration metric: took 4m13.138964255s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:13:54.829912  177307 kubeadm.go:640] restartCluster took 4m33.090839538s
	W1213 00:13:54.829977  177307 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1213 00:13:54.830014  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 00:13:55.192745  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:57.193249  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:59.196279  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:57.403699  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:13:57.551632  177122 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 00:13:57.551656  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 00:13:57.590132  177122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:13:57.617477  177122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:13:57.648290  177122 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 00:13:57.648324  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 00:13:57.724394  177122 node_ready.go:35] waiting up to 6m0s for node "embed-certs-335807" to be "Ready" ...
	I1213 00:13:57.724498  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 00:13:57.751666  177122 node_ready.go:49] node "embed-certs-335807" has status "Ready":"True"
	I1213 00:13:57.751704  177122 node_ready.go:38] duration metric: took 27.274531ms waiting for node "embed-certs-335807" to be "Ready" ...
	I1213 00:13:57.751718  177122 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:13:57.764283  177122 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gs4kb" in "kube-system" namespace to be "Ready" ...
	I1213 00:13:57.835941  177122 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:13:57.835968  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 00:13:58.040994  177122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:13:59.867561  177122 pod_ready.go:102] pod "coredns-5dd5756b68-gs4kb" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:00.210713  177122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.620538044s)
	I1213 00:14:00.210745  177122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.593229432s)
	I1213 00:14:00.210763  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.210775  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.210805  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.210846  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.210892  177122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.169863052s)
	I1213 00:14:00.210932  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.210951  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.210803  177122 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.48627637s)
	I1213 00:14:00.211241  177122 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1213 00:14:00.211428  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.211467  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.211477  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.211486  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.211496  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.211804  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.211843  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.211851  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.211860  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.211869  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.211979  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.212025  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.212033  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.212251  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.213205  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.213214  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.213221  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.213253  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.213269  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.213287  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.213300  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.213565  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.213592  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.213600  177122 addons.go:467] Verifying addon metrics-server=true in "embed-certs-335807"
	I1213 00:14:00.213633  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.231892  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.231921  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.232238  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.232257  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.234089  177122 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1213 00:13:58.285584  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:00.286469  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:00.235676  177122 addons.go:502] enable addons completed in 2.959016059s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1213 00:14:01.848071  177122 pod_ready.go:92] pod "coredns-5dd5756b68-gs4kb" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.848093  177122 pod_ready.go:81] duration metric: took 4.083780035s waiting for pod "coredns-5dd5756b68-gs4kb" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.848101  177122 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t92hd" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.854062  177122 pod_ready.go:92] pod "coredns-5dd5756b68-t92hd" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.854082  177122 pod_ready.go:81] duration metric: took 5.975194ms waiting for pod "coredns-5dd5756b68-t92hd" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.854090  177122 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.864033  177122 pod_ready.go:92] pod "etcd-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.864060  177122 pod_ready.go:81] duration metric: took 9.963384ms waiting for pod "etcd-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.864072  177122 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.875960  177122 pod_ready.go:92] pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.875990  177122 pod_ready.go:81] duration metric: took 11.909604ms waiting for pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.876004  177122 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.882084  177122 pod_ready.go:92] pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.882107  177122 pod_ready.go:81] duration metric: took 6.092978ms waiting for pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.882118  177122 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ccq47" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:02.645363  177122 pod_ready.go:92] pod "kube-proxy-ccq47" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:02.645389  177122 pod_ready.go:81] duration metric: took 763.264171ms waiting for pod "kube-proxy-ccq47" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:02.645399  177122 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:03.045476  177122 pod_ready.go:92] pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:03.045502  177122 pod_ready.go:81] duration metric: took 400.097321ms waiting for pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:03.045513  177122 pod_ready.go:38] duration metric: took 5.293782674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:14:03.045530  177122 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:14:03.045584  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:14:03.062802  177122 api_server.go:72] duration metric: took 5.663381439s to wait for apiserver process to appear ...
	I1213 00:14:03.062827  177122 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:14:03.062848  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:14:03.068482  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 200:
	ok
	I1213 00:14:03.069909  177122 api_server.go:141] control plane version: v1.28.4
	I1213 00:14:03.069934  177122 api_server.go:131] duration metric: took 7.099309ms to wait for apiserver health ...
	I1213 00:14:03.069943  177122 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:14:03.248993  177122 system_pods.go:59] 9 kube-system pods found
	I1213 00:14:03.249025  177122 system_pods.go:61] "coredns-5dd5756b68-gs4kb" [d4b86e83-a0a1-4bf8-958e-e154e91f47ef] Running
	I1213 00:14:03.249032  177122 system_pods.go:61] "coredns-5dd5756b68-t92hd" [1ad2dcb3-bcda-42af-b4ce-0d95bba0315f] Running
	I1213 00:14:03.249039  177122 system_pods.go:61] "etcd-embed-certs-335807" [aa5222a7-5670-4550-9d65-6db2095898be] Running
	I1213 00:14:03.249045  177122 system_pods.go:61] "kube-apiserver-embed-certs-335807" [ca0e9de9-8f6a-4bae-b1d1-04b7c0c3cd4c] Running
	I1213 00:14:03.249052  177122 system_pods.go:61] "kube-controller-manager-embed-certs-335807" [f5563afe-3d6c-4b44-b0c0-765da451fd88] Running
	I1213 00:14:03.249057  177122 system_pods.go:61] "kube-proxy-ccq47" [68f3c55f-175e-40af-a769-65c859d5012d] Running
	I1213 00:14:03.249063  177122 system_pods.go:61] "kube-scheduler-embed-certs-335807" [c989cf08-80e9-4b0f-b0e4-f840c6259ace] Running
	I1213 00:14:03.249074  177122 system_pods.go:61] "metrics-server-57f55c9bc5-z7qb4" [b33959c3-63b7-4a81-adda-6d2971036e89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:03.249082  177122 system_pods.go:61] "storage-provisioner" [816660d7-a041-4695-b7da-d977b8891935] Running
	I1213 00:14:03.249095  177122 system_pods.go:74] duration metric: took 179.144496ms to wait for pod list to return data ...
	I1213 00:14:03.249106  177122 default_sa.go:34] waiting for default service account to be created ...
	I1213 00:14:03.444557  177122 default_sa.go:45] found service account: "default"
	I1213 00:14:03.444591  177122 default_sa.go:55] duration metric: took 195.469108ms for default service account to be created ...
	I1213 00:14:03.444603  177122 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 00:14:03.651685  177122 system_pods.go:86] 9 kube-system pods found
	I1213 00:14:03.651714  177122 system_pods.go:89] "coredns-5dd5756b68-gs4kb" [d4b86e83-a0a1-4bf8-958e-e154e91f47ef] Running
	I1213 00:14:03.651719  177122 system_pods.go:89] "coredns-5dd5756b68-t92hd" [1ad2dcb3-bcda-42af-b4ce-0d95bba0315f] Running
	I1213 00:14:03.651723  177122 system_pods.go:89] "etcd-embed-certs-335807" [aa5222a7-5670-4550-9d65-6db2095898be] Running
	I1213 00:14:03.651727  177122 system_pods.go:89] "kube-apiserver-embed-certs-335807" [ca0e9de9-8f6a-4bae-b1d1-04b7c0c3cd4c] Running
	I1213 00:14:03.651731  177122 system_pods.go:89] "kube-controller-manager-embed-certs-335807" [f5563afe-3d6c-4b44-b0c0-765da451fd88] Running
	I1213 00:14:03.651735  177122 system_pods.go:89] "kube-proxy-ccq47" [68f3c55f-175e-40af-a769-65c859d5012d] Running
	I1213 00:14:03.651739  177122 system_pods.go:89] "kube-scheduler-embed-certs-335807" [c989cf08-80e9-4b0f-b0e4-f840c6259ace] Running
	I1213 00:14:03.651745  177122 system_pods.go:89] "metrics-server-57f55c9bc5-z7qb4" [b33959c3-63b7-4a81-adda-6d2971036e89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:03.651750  177122 system_pods.go:89] "storage-provisioner" [816660d7-a041-4695-b7da-d977b8891935] Running
	I1213 00:14:03.651758  177122 system_pods.go:126] duration metric: took 207.148805ms to wait for k8s-apps to be running ...
	I1213 00:14:03.651764  177122 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 00:14:03.651814  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:03.666068  177122 system_svc.go:56] duration metric: took 14.292973ms WaitForService to wait for kubelet.
	I1213 00:14:03.666093  177122 kubeadm.go:581] duration metric: took 6.266680553s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1213 00:14:03.666109  177122 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:14:03.845399  177122 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:14:03.845431  177122 node_conditions.go:123] node cpu capacity is 2
	I1213 00:14:03.845447  177122 node_conditions.go:105] duration metric: took 179.332019ms to run NodePressure ...
	I1213 00:14:03.845462  177122 start.go:228] waiting for startup goroutines ...
	I1213 00:14:03.845470  177122 start.go:233] waiting for cluster config update ...
	I1213 00:14:03.845482  177122 start.go:242] writing updated cluster config ...
	I1213 00:14:03.845850  177122 ssh_runner.go:195] Run: rm -f paused
	I1213 00:14:03.898374  177122 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1213 00:14:03.900465  177122 out.go:177] * Done! kubectl is now configured to use "embed-certs-335807" cluster and "default" namespace by default
	I1213 00:14:01.693061  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:01.886947  177409 pod_ready.go:81] duration metric: took 4m0.000066225s waiting for pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace to be "Ready" ...
	E1213 00:14:01.886997  177409 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1213 00:14:01.887010  177409 pod_ready.go:38] duration metric: took 4m3.203360525s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:14:01.887056  177409 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:14:01.887093  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 00:14:01.887156  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 00:14:01.956004  177409 cri.go:89] found id: "c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:01.956029  177409 cri.go:89] found id: ""
	I1213 00:14:01.956038  177409 logs.go:284] 1 containers: [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1]
	I1213 00:14:01.956096  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:01.961314  177409 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 00:14:01.961388  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 00:14:02.001797  177409 cri.go:89] found id: "fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:02.001825  177409 cri.go:89] found id: ""
	I1213 00:14:02.001835  177409 logs.go:284] 1 containers: [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7]
	I1213 00:14:02.001881  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.007127  177409 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 00:14:02.007193  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 00:14:02.050259  177409 cri.go:89] found id: "125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:02.050283  177409 cri.go:89] found id: ""
	I1213 00:14:02.050294  177409 logs.go:284] 1 containers: [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad]
	I1213 00:14:02.050347  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.056086  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 00:14:02.056147  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 00:14:02.125159  177409 cri.go:89] found id: "c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:02.125189  177409 cri.go:89] found id: ""
	I1213 00:14:02.125199  177409 logs.go:284] 1 containers: [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee]
	I1213 00:14:02.125261  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.129874  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 00:14:02.129939  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 00:14:02.175027  177409 cri.go:89] found id: "545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:02.175058  177409 cri.go:89] found id: ""
	I1213 00:14:02.175067  177409 logs.go:284] 1 containers: [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41]
	I1213 00:14:02.175127  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.180444  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 00:14:02.180515  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 00:14:02.219578  177409 cri.go:89] found id: "57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:02.219603  177409 cri.go:89] found id: ""
	I1213 00:14:02.219610  177409 logs.go:284] 1 containers: [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673]
	I1213 00:14:02.219664  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.223644  177409 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 00:14:02.223693  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 00:14:02.260542  177409 cri.go:89] found id: ""
	I1213 00:14:02.260567  177409 logs.go:284] 0 containers: []
	W1213 00:14:02.260575  177409 logs.go:286] No container was found matching "kindnet"
	I1213 00:14:02.260583  177409 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 00:14:02.260656  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 00:14:02.304058  177409 cri.go:89] found id: "c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:02.304082  177409 cri.go:89] found id: "705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:02.304090  177409 cri.go:89] found id: ""
	I1213 00:14:02.304100  177409 logs.go:284] 2 containers: [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a]
	I1213 00:14:02.304159  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.308606  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.312421  177409 logs.go:123] Gathering logs for kube-scheduler [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee] ...
	I1213 00:14:02.312473  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:02.356415  177409 logs.go:123] Gathering logs for kube-proxy [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41] ...
	I1213 00:14:02.356460  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:02.405870  177409 logs.go:123] Gathering logs for CRI-O ...
	I1213 00:14:02.405902  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 00:14:02.876461  177409 logs.go:123] Gathering logs for describe nodes ...
	I1213 00:14:02.876508  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 00:14:03.037302  177409 logs.go:123] Gathering logs for kube-apiserver [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1] ...
	I1213 00:14:03.037334  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:03.098244  177409 logs.go:123] Gathering logs for kube-controller-manager [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673] ...
	I1213 00:14:03.098273  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:03.163681  177409 logs.go:123] Gathering logs for container status ...
	I1213 00:14:03.163712  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 00:14:03.216883  177409 logs.go:123] Gathering logs for kubelet ...
	I1213 00:14:03.216912  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 00:14:03.267979  177409 logs.go:123] Gathering logs for coredns [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad] ...
	I1213 00:14:03.268011  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:03.309364  177409 logs.go:123] Gathering logs for storage-provisioner [705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a] ...
	I1213 00:14:03.309394  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:03.352427  177409 logs.go:123] Gathering logs for etcd [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7] ...
	I1213 00:14:03.352479  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:03.406508  177409 logs.go:123] Gathering logs for storage-provisioner [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974] ...
	I1213 00:14:03.406547  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:03.449959  177409 logs.go:123] Gathering logs for dmesg ...
	I1213 00:14:03.449985  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 00:14:02.784516  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:05.284536  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:09.408895  177307 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.578851358s)
	I1213 00:14:09.408954  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:09.422044  177307 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:14:09.430579  177307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:14:09.438689  177307 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:14:09.438727  177307 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 00:14:09.493519  177307 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I1213 00:14:09.493657  177307 kubeadm.go:322] [preflight] Running pre-flight checks
	I1213 00:14:09.648151  177307 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 00:14:09.648294  177307 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 00:14:09.648489  177307 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 00:14:09.908199  177307 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 00:14:05.974125  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:14:05.992335  177409 api_server.go:72] duration metric: took 4m12.842684139s to wait for apiserver process to appear ...
	I1213 00:14:05.992364  177409 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:14:05.992411  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 00:14:05.992491  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 00:14:06.037770  177409 cri.go:89] found id: "c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:06.037796  177409 cri.go:89] found id: ""
	I1213 00:14:06.037805  177409 logs.go:284] 1 containers: [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1]
	I1213 00:14:06.037863  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.042949  177409 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 00:14:06.043016  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 00:14:06.090863  177409 cri.go:89] found id: "fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:06.090888  177409 cri.go:89] found id: ""
	I1213 00:14:06.090897  177409 logs.go:284] 1 containers: [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7]
	I1213 00:14:06.090951  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.103859  177409 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 00:14:06.103925  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 00:14:06.156957  177409 cri.go:89] found id: "125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:06.156982  177409 cri.go:89] found id: ""
	I1213 00:14:06.156992  177409 logs.go:284] 1 containers: [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad]
	I1213 00:14:06.157053  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.162170  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 00:14:06.162220  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 00:14:06.204839  177409 cri.go:89] found id: "c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:06.204867  177409 cri.go:89] found id: ""
	I1213 00:14:06.204877  177409 logs.go:284] 1 containers: [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee]
	I1213 00:14:06.204942  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.210221  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 00:14:06.210287  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 00:14:06.255881  177409 cri.go:89] found id: "545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:06.255909  177409 cri.go:89] found id: ""
	I1213 00:14:06.255918  177409 logs.go:284] 1 containers: [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41]
	I1213 00:14:06.255984  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.260853  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 00:14:06.260924  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 00:14:06.308377  177409 cri.go:89] found id: "57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:06.308400  177409 cri.go:89] found id: ""
	I1213 00:14:06.308413  177409 logs.go:284] 1 containers: [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673]
	I1213 00:14:06.308493  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.315028  177409 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 00:14:06.315111  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 00:14:06.365453  177409 cri.go:89] found id: ""
	I1213 00:14:06.365484  177409 logs.go:284] 0 containers: []
	W1213 00:14:06.365494  177409 logs.go:286] No container was found matching "kindnet"
	I1213 00:14:06.365507  177409 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 00:14:06.365568  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 00:14:06.423520  177409 cri.go:89] found id: "c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:06.423545  177409 cri.go:89] found id: "705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:06.423560  177409 cri.go:89] found id: ""
	I1213 00:14:06.423571  177409 logs.go:284] 2 containers: [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a]
	I1213 00:14:06.423628  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.429613  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.434283  177409 logs.go:123] Gathering logs for describe nodes ...
	I1213 00:14:06.434310  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 00:14:06.571329  177409 logs.go:123] Gathering logs for kube-proxy [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41] ...
	I1213 00:14:06.571375  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:06.613274  177409 logs.go:123] Gathering logs for kube-controller-manager [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673] ...
	I1213 00:14:06.613307  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:06.673407  177409 logs.go:123] Gathering logs for dmesg ...
	I1213 00:14:06.673455  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 00:14:06.688886  177409 logs.go:123] Gathering logs for coredns [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad] ...
	I1213 00:14:06.688933  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:06.733130  177409 logs.go:123] Gathering logs for storage-provisioner [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974] ...
	I1213 00:14:06.733162  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:06.780131  177409 logs.go:123] Gathering logs for storage-provisioner [705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a] ...
	I1213 00:14:06.780161  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:06.827465  177409 logs.go:123] Gathering logs for etcd [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7] ...
	I1213 00:14:06.827500  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:06.880245  177409 logs.go:123] Gathering logs for kube-scheduler [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee] ...
	I1213 00:14:06.880286  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:06.919735  177409 logs.go:123] Gathering logs for kubelet ...
	I1213 00:14:06.919764  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 00:14:06.974039  177409 logs.go:123] Gathering logs for CRI-O ...
	I1213 00:14:06.974074  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 00:14:07.400452  177409 logs.go:123] Gathering logs for container status ...
	I1213 00:14:07.400491  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 00:14:07.456759  177409 logs.go:123] Gathering logs for kube-apiserver [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1] ...
	I1213 00:14:07.456789  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:10.010686  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:14:10.017803  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I1213 00:14:10.019196  177409 api_server.go:141] control plane version: v1.28.4
	I1213 00:14:10.019216  177409 api_server.go:131] duration metric: took 4.026844615s to wait for apiserver health ...
	I1213 00:14:10.019225  177409 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:14:10.019251  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 00:14:10.019303  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 00:14:07.784301  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:09.785226  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:09.910151  177307 out.go:204]   - Generating certificates and keys ...
	I1213 00:14:09.910259  177307 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1213 00:14:09.910339  177307 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1213 00:14:09.910444  177307 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 00:14:09.910527  177307 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1213 00:14:09.910616  177307 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 00:14:09.910662  177307 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1213 00:14:09.910713  177307 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1213 00:14:09.910791  177307 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1213 00:14:09.910892  177307 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 00:14:09.911041  177307 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 00:14:09.911107  177307 kubeadm.go:322] [certs] Using the existing "sa" key
	I1213 00:14:09.911186  177307 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 00:14:10.262533  177307 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 00:14:10.508123  177307 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 00:14:10.766822  177307 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 00:14:10.866565  177307 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 00:14:11.206659  177307 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 00:14:11.207238  177307 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 00:14:11.210018  177307 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 00:14:10.061672  177409 cri.go:89] found id: "c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:10.061699  177409 cri.go:89] found id: ""
	I1213 00:14:10.061708  177409 logs.go:284] 1 containers: [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1]
	I1213 00:14:10.061769  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.066426  177409 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 00:14:10.066491  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 00:14:10.107949  177409 cri.go:89] found id: "fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:10.107978  177409 cri.go:89] found id: ""
	I1213 00:14:10.107994  177409 logs.go:284] 1 containers: [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7]
	I1213 00:14:10.108053  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.112321  177409 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 00:14:10.112393  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 00:14:10.169082  177409 cri.go:89] found id: "125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:10.169110  177409 cri.go:89] found id: ""
	I1213 00:14:10.169120  177409 logs.go:284] 1 containers: [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad]
	I1213 00:14:10.169175  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.174172  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 00:14:10.174225  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 00:14:10.220290  177409 cri.go:89] found id: "c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:10.220313  177409 cri.go:89] found id: ""
	I1213 00:14:10.220326  177409 logs.go:284] 1 containers: [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee]
	I1213 00:14:10.220384  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.225241  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 00:14:10.225310  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 00:14:10.271312  177409 cri.go:89] found id: "545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:10.271336  177409 cri.go:89] found id: ""
	I1213 00:14:10.271345  177409 logs.go:284] 1 containers: [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41]
	I1213 00:14:10.271401  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.275974  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 00:14:10.276049  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 00:14:10.324262  177409 cri.go:89] found id: "57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:10.324288  177409 cri.go:89] found id: ""
	I1213 00:14:10.324299  177409 logs.go:284] 1 containers: [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673]
	I1213 00:14:10.324360  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.329065  177409 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 00:14:10.329130  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 00:14:10.375611  177409 cri.go:89] found id: ""
	I1213 00:14:10.375640  177409 logs.go:284] 0 containers: []
	W1213 00:14:10.375648  177409 logs.go:286] No container was found matching "kindnet"
	I1213 00:14:10.375654  177409 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 00:14:10.375725  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 00:14:10.420778  177409 cri.go:89] found id: "c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:10.420807  177409 cri.go:89] found id: "705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:10.420812  177409 cri.go:89] found id: ""
	I1213 00:14:10.420819  177409 logs.go:284] 2 containers: [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a]
	I1213 00:14:10.420866  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.425676  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.430150  177409 logs.go:123] Gathering logs for kubelet ...
	I1213 00:14:10.430180  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 00:14:10.486314  177409 logs.go:123] Gathering logs for dmesg ...
	I1213 00:14:10.486351  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 00:14:10.500915  177409 logs.go:123] Gathering logs for kube-proxy [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41] ...
	I1213 00:14:10.500946  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:10.543073  177409 logs.go:123] Gathering logs for storage-provisioner [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974] ...
	I1213 00:14:10.543108  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:10.584779  177409 logs.go:123] Gathering logs for storage-provisioner [705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a] ...
	I1213 00:14:10.584814  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:10.629824  177409 logs.go:123] Gathering logs for describe nodes ...
	I1213 00:14:10.629852  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 00:14:10.756816  177409 logs.go:123] Gathering logs for kube-apiserver [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1] ...
	I1213 00:14:10.756857  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:10.807506  177409 logs.go:123] Gathering logs for coredns [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad] ...
	I1213 00:14:10.807536  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:10.849398  177409 logs.go:123] Gathering logs for kube-controller-manager [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673] ...
	I1213 00:14:10.849436  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:10.911470  177409 logs.go:123] Gathering logs for CRI-O ...
	I1213 00:14:10.911508  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 00:14:11.288892  177409 logs.go:123] Gathering logs for etcd [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7] ...
	I1213 00:14:11.288941  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:11.361299  177409 logs.go:123] Gathering logs for kube-scheduler [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee] ...
	I1213 00:14:11.361347  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:11.407800  177409 logs.go:123] Gathering logs for container status ...
	I1213 00:14:11.407850  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 00:14:13.965440  177409 system_pods.go:59] 8 kube-system pods found
	I1213 00:14:13.965477  177409 system_pods.go:61] "coredns-5dd5756b68-ftv9l" [60d9730b-2e6b-4263-a70d-273cf6837f60] Running
	I1213 00:14:13.965485  177409 system_pods.go:61] "etcd-default-k8s-diff-port-743278" [4c78aadc-8213-4cd3-a365-e8d71d00b1ed] Running
	I1213 00:14:13.965493  177409 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-743278" [2590235b-fc24-41ed-8879-909cbba26d5c] Running
	I1213 00:14:13.965500  177409 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-743278" [58396a31-0a0d-4a3f-b658-e27f836affd1] Running
	I1213 00:14:13.965505  177409 system_pods.go:61] "kube-proxy-zk4wl" [e20fe8f7-0c1f-4be3-8184-cd3d6cc19a43] Running
	I1213 00:14:13.965509  177409 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-743278" [e4312815-a719-4ab5-b628-9958dd7ce658] Running
	I1213 00:14:13.965518  177409 system_pods.go:61] "metrics-server-57f55c9bc5-6q9jg" [b1849258-4fd1-43a5-b67b-02d8e44acd8b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:13.965528  177409 system_pods.go:61] "storage-provisioner" [d87ee16e-300f-4797-b0be-efc256d0e827] Running
	I1213 00:14:13.965538  177409 system_pods.go:74] duration metric: took 3.946305195s to wait for pod list to return data ...
	I1213 00:14:13.965548  177409 default_sa.go:34] waiting for default service account to be created ...
	I1213 00:14:13.969074  177409 default_sa.go:45] found service account: "default"
	I1213 00:14:13.969103  177409 default_sa.go:55] duration metric: took 3.543208ms for default service account to be created ...
	I1213 00:14:13.969114  177409 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 00:14:13.977167  177409 system_pods.go:86] 8 kube-system pods found
	I1213 00:14:13.977201  177409 system_pods.go:89] "coredns-5dd5756b68-ftv9l" [60d9730b-2e6b-4263-a70d-273cf6837f60] Running
	I1213 00:14:13.977211  177409 system_pods.go:89] "etcd-default-k8s-diff-port-743278" [4c78aadc-8213-4cd3-a365-e8d71d00b1ed] Running
	I1213 00:14:13.977219  177409 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-743278" [2590235b-fc24-41ed-8879-909cbba26d5c] Running
	I1213 00:14:13.977226  177409 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-743278" [58396a31-0a0d-4a3f-b658-e27f836affd1] Running
	I1213 00:14:13.977232  177409 system_pods.go:89] "kube-proxy-zk4wl" [e20fe8f7-0c1f-4be3-8184-cd3d6cc19a43] Running
	I1213 00:14:13.977238  177409 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-743278" [e4312815-a719-4ab5-b628-9958dd7ce658] Running
	I1213 00:14:13.977249  177409 system_pods.go:89] "metrics-server-57f55c9bc5-6q9jg" [b1849258-4fd1-43a5-b67b-02d8e44acd8b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:13.977257  177409 system_pods.go:89] "storage-provisioner" [d87ee16e-300f-4797-b0be-efc256d0e827] Running
	I1213 00:14:13.977272  177409 system_pods.go:126] duration metric: took 8.1502ms to wait for k8s-apps to be running ...
	I1213 00:14:13.977288  177409 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 00:14:13.977342  177409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:13.996304  177409 system_svc.go:56] duration metric: took 19.006856ms WaitForService to wait for kubelet.
	I1213 00:14:13.996340  177409 kubeadm.go:581] duration metric: took 4m20.846697962s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1213 00:14:13.996374  177409 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:14:14.000473  177409 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:14:14.000505  177409 node_conditions.go:123] node cpu capacity is 2
	I1213 00:14:14.000518  177409 node_conditions.go:105] duration metric: took 4.137212ms to run NodePressure ...
	I1213 00:14:14.000534  177409 start.go:228] waiting for startup goroutines ...
	I1213 00:14:14.000544  177409 start.go:233] waiting for cluster config update ...
	I1213 00:14:14.000561  177409 start.go:242] writing updated cluster config ...
	I1213 00:14:14.000901  177409 ssh_runner.go:195] Run: rm -f paused
	I1213 00:14:14.059785  177409 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1213 00:14:14.062155  177409 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-743278" cluster and "default" namespace by default
	I1213 00:14:11.212405  177307 out.go:204]   - Booting up control plane ...
	I1213 00:14:11.212538  177307 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 00:14:11.213865  177307 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 00:14:11.215312  177307 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 00:14:11.235356  177307 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 00:14:11.236645  177307 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 00:14:11.236755  177307 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1213 00:14:11.385788  177307 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 00:14:12.284994  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:14.784159  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:19.387966  177307 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002219 seconds
	I1213 00:14:19.402873  177307 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 00:14:19.424220  177307 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 00:14:19.954243  177307 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 00:14:19.954453  177307 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-143586 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 00:14:20.468986  177307 kubeadm.go:322] [bootstrap-token] Using token: nss44e.j85t1ilri9kvvn0e
	I1213 00:14:16.785364  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:19.284214  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:20.470732  177307 out.go:204]   - Configuring RBAC rules ...
	I1213 00:14:20.470866  177307 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 00:14:20.479490  177307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 00:14:20.488098  177307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 00:14:20.491874  177307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 00:14:20.496891  177307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 00:14:20.506058  177307 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 00:14:20.523032  177307 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 00:14:20.796465  177307 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1213 00:14:20.892018  177307 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1213 00:14:20.892049  177307 kubeadm.go:322] 
	I1213 00:14:20.892159  177307 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1213 00:14:20.892185  177307 kubeadm.go:322] 
	I1213 00:14:20.892284  177307 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1213 00:14:20.892296  177307 kubeadm.go:322] 
	I1213 00:14:20.892338  177307 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1213 00:14:20.892421  177307 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 00:14:20.892512  177307 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 00:14:20.892529  177307 kubeadm.go:322] 
	I1213 00:14:20.892620  177307 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1213 00:14:20.892648  177307 kubeadm.go:322] 
	I1213 00:14:20.892734  177307 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 00:14:20.892745  177307 kubeadm.go:322] 
	I1213 00:14:20.892807  177307 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1213 00:14:20.892938  177307 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 00:14:20.893057  177307 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 00:14:20.893072  177307 kubeadm.go:322] 
	I1213 00:14:20.893182  177307 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 00:14:20.893286  177307 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1213 00:14:20.893307  177307 kubeadm.go:322] 
	I1213 00:14:20.893446  177307 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nss44e.j85t1ilri9kvvn0e \
	I1213 00:14:20.893588  177307 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d \
	I1213 00:14:20.893625  177307 kubeadm.go:322] 	--control-plane 
	I1213 00:14:20.893634  177307 kubeadm.go:322] 
	I1213 00:14:20.893740  177307 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1213 00:14:20.893752  177307 kubeadm.go:322] 
	I1213 00:14:20.893877  177307 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nss44e.j85t1ilri9kvvn0e \
	I1213 00:14:20.894017  177307 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1213 00:14:20.895217  177307 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 00:14:20.895249  177307 cni.go:84] Creating CNI manager for ""
	I1213 00:14:20.895261  177307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:14:20.897262  177307 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:14:20.898838  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:14:20.933446  177307 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:14:20.985336  177307 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:14:20.985435  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:20.985458  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=no-preload-143586 minikube.k8s.io/updated_at=2023_12_13T00_14_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:21.062513  177307 ops.go:34] apiserver oom_adj: -16
	I1213 00:14:21.374568  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:21.482135  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:22.088971  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:22.588816  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:23.088960  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:23.588701  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:24.088568  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:21.783473  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:23.784019  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:25.785712  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:24.588803  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:25.088983  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:25.589097  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:26.088561  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:26.589160  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:27.088601  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:27.588337  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:28.088578  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:28.588533  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:29.088398  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:28.284015  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:30.285509  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:29.588587  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:30.088826  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:30.588871  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:31.089336  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:31.588959  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:32.088390  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:32.589079  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:33.088948  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:33.589067  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:34.089108  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:34.261304  177307 kubeadm.go:1088] duration metric: took 13.275930767s to wait for elevateKubeSystemPrivileges.
	I1213 00:14:34.261367  177307 kubeadm.go:406] StartCluster complete in 5m12.573209179s
	I1213 00:14:34.261392  177307 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:14:34.261511  177307 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:14:34.264237  177307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:14:34.264668  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:14:34.264951  177307 config.go:182] Loaded profile config "no-preload-143586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1213 00:14:34.265065  177307 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:14:34.265128  177307 addons.go:69] Setting storage-provisioner=true in profile "no-preload-143586"
	I1213 00:14:34.265150  177307 addons.go:231] Setting addon storage-provisioner=true in "no-preload-143586"
	W1213 00:14:34.265161  177307 addons.go:240] addon storage-provisioner should already be in state true
	I1213 00:14:34.265202  177307 host.go:66] Checking if "no-preload-143586" exists ...
	I1213 00:14:34.265231  177307 addons.go:69] Setting default-storageclass=true in profile "no-preload-143586"
	I1213 00:14:34.265262  177307 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-143586"
	I1213 00:14:34.265606  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.265612  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.265627  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.265628  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.265846  177307 addons.go:69] Setting metrics-server=true in profile "no-preload-143586"
	I1213 00:14:34.265878  177307 addons.go:231] Setting addon metrics-server=true in "no-preload-143586"
	W1213 00:14:34.265890  177307 addons.go:240] addon metrics-server should already be in state true
	I1213 00:14:34.265935  177307 host.go:66] Checking if "no-preload-143586" exists ...
	I1213 00:14:34.266231  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.266277  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.287844  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34161
	I1213 00:14:34.287882  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38719
	I1213 00:14:34.287968  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43979
	I1213 00:14:34.288509  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.288529  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.288811  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.289178  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.289197  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.289310  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.289325  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.289335  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.289347  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.289707  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.289713  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.289736  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.289891  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:14:34.290392  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.290398  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.290415  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.290417  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.293696  177307 addons.go:231] Setting addon default-storageclass=true in "no-preload-143586"
	W1213 00:14:34.293725  177307 addons.go:240] addon default-storageclass should already be in state true
	I1213 00:14:34.293756  177307 host.go:66] Checking if "no-preload-143586" exists ...
	I1213 00:14:34.294150  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.294187  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.309103  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39273
	I1213 00:14:34.309683  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.310362  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.310387  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.310830  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.311091  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:14:34.312755  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37041
	I1213 00:14:34.313192  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.313601  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:14:34.313796  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.313814  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.316496  177307 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:14:34.314223  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.316102  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41945
	I1213 00:14:34.318112  177307 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:14:34.318127  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:14:34.318144  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:14:34.318260  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.318670  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.318693  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.319401  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.319422  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.319860  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.320080  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:14:34.321977  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:14:34.323695  177307 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 00:14:34.322509  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.325025  177307 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 00:14:34.325037  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 00:14:34.325053  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:14:34.323731  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:14:34.325089  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.323250  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:14:34.325250  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:14:34.325428  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:14:34.325563  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:14:34.328055  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.328364  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:14:34.328386  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.328712  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:14:34.328867  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:14:34.328980  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:14:34.329099  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:14:34.339175  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35817
	I1213 00:14:34.339820  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.340300  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.340314  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.340662  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.340821  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:14:34.342399  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:14:34.342673  177307 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:14:34.342694  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:14:34.342720  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:14:34.345475  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.345804  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:14:34.345839  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.346062  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:14:34.346256  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:14:34.346453  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:14:34.346622  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:14:34.425634  177307 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-143586" context rescaled to 1 replicas
	I1213 00:14:34.425672  177307 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.181 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:14:34.427471  177307 out.go:177] * Verifying Kubernetes components...
	I1213 00:14:32.783642  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:34.786810  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:34.428983  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:34.589995  177307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:14:34.590692  177307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:14:34.592452  177307 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 00:14:34.592472  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 00:14:34.643312  177307 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 00:14:34.643336  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 00:14:34.649786  177307 node_ready.go:35] waiting up to 6m0s for node "no-preload-143586" to be "Ready" ...
	I1213 00:14:34.649926  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 00:14:34.683306  177307 node_ready.go:49] node "no-preload-143586" has status "Ready":"True"
	I1213 00:14:34.683339  177307 node_ready.go:38] duration metric: took 33.525188ms waiting for node "no-preload-143586" to be "Ready" ...
	I1213 00:14:34.683352  177307 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:14:34.711542  177307 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:14:34.711570  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 00:14:34.738788  177307 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-8fb8b" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:34.823110  177307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:14:35.743550  177307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.153515373s)
	I1213 00:14:35.743618  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.743634  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.743661  177307 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.093703901s)
	I1213 00:14:35.743611  177307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.152891747s)
	I1213 00:14:35.743699  177307 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1213 00:14:35.743719  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.743732  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.744060  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:35.744059  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.744088  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.744100  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.744114  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.744114  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:35.744158  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.744195  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.744209  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.744223  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.745779  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.745829  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.745855  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.745838  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.745797  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:35.745790  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:35.757271  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.757292  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.757758  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.757776  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.757787  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:36.114702  177307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.291538738s)
	I1213 00:14:36.114760  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:36.114773  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:36.115132  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:36.115149  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:36.115158  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:36.115168  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:36.115411  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:36.115426  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:36.115436  177307 addons.go:467] Verifying addon metrics-server=true in "no-preload-143586"
	I1213 00:14:36.117975  177307 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1213 00:14:36.119554  177307 addons.go:502] enable addons completed in 1.85448385s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1213 00:14:37.069993  177307 pod_ready.go:102] pod "coredns-76f75df574-8fb8b" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:38.563525  177307 pod_ready.go:92] pod "coredns-76f75df574-8fb8b" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.563551  177307 pod_ready.go:81] duration metric: took 3.824732725s waiting for pod "coredns-76f75df574-8fb8b" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.563561  177307 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hs9rg" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.565949  177307 pod_ready.go:97] error getting pod "coredns-76f75df574-hs9rg" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-hs9rg" not found
	I1213 00:14:38.565976  177307 pod_ready.go:81] duration metric: took 2.409349ms waiting for pod "coredns-76f75df574-hs9rg" in "kube-system" namespace to be "Ready" ...
	E1213 00:14:38.565984  177307 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-hs9rg" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-hs9rg" not found
	I1213 00:14:38.565990  177307 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.571396  177307 pod_ready.go:92] pod "etcd-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.571416  177307 pod_ready.go:81] duration metric: took 5.419634ms waiting for pod "etcd-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.571424  177307 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.576228  177307 pod_ready.go:92] pod "kube-apiserver-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.576248  177307 pod_ready.go:81] duration metric: took 4.818853ms waiting for pod "kube-apiserver-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.576256  177307 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.581260  177307 pod_ready.go:92] pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.581281  177307 pod_ready.go:81] duration metric: took 5.019621ms waiting for pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.581289  177307 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xsdtr" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.760984  177307 pod_ready.go:92] pod "kube-proxy-xsdtr" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.761006  177307 pod_ready.go:81] duration metric: took 179.711484ms waiting for pod "kube-proxy-xsdtr" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.761015  177307 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:39.160713  177307 pod_ready.go:92] pod "kube-scheduler-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:39.160738  177307 pod_ready.go:81] duration metric: took 399.716844ms waiting for pod "kube-scheduler-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:39.160746  177307 pod_ready.go:38] duration metric: took 4.477382003s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:14:39.160762  177307 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:14:39.160809  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:14:39.176747  177307 api_server.go:72] duration metric: took 4.751030848s to wait for apiserver process to appear ...
	I1213 00:14:39.176774  177307 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:14:39.176791  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:14:39.183395  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 200:
	ok
	I1213 00:14:39.184769  177307 api_server.go:141] control plane version: v1.29.0-rc.2
	I1213 00:14:39.184789  177307 api_server.go:131] duration metric: took 8.009007ms to wait for apiserver health ...
	I1213 00:14:39.184799  177307 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:14:39.364215  177307 system_pods.go:59] 8 kube-system pods found
	I1213 00:14:39.364251  177307 system_pods.go:61] "coredns-76f75df574-8fb8b" [3ca4e237-6a35-4e8f-9731-3f9655eba995] Running
	I1213 00:14:39.364256  177307 system_pods.go:61] "etcd-no-preload-143586" [205757f5-5bd7-416c-af72-6ebf428d7302] Running
	I1213 00:14:39.364260  177307 system_pods.go:61] "kube-apiserver-no-preload-143586" [a479caf2-8ff4-4b78-8d93-e8f672a853b9] Running
	I1213 00:14:39.364265  177307 system_pods.go:61] "kube-controller-manager-no-preload-143586" [4afd5e8a-64b3-4e0c-a723-b6a6bd2445d4] Running
	I1213 00:14:39.364269  177307 system_pods.go:61] "kube-proxy-xsdtr" [23a261a4-17d1-4657-8052-02b71055c850] Running
	I1213 00:14:39.364273  177307 system_pods.go:61] "kube-scheduler-no-preload-143586" [71312243-0ba6-40e0-80a5-c1652b5270e9] Running
	I1213 00:14:39.364280  177307 system_pods.go:61] "metrics-server-57f55c9bc5-q7v45" [1579f5c9-d574-4ab8-9add-e89621b9c203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:39.364284  177307 system_pods.go:61] "storage-provisioner" [400b27cc-1713-4201-8097-3e3fd8004690] Running
	I1213 00:14:39.364292  177307 system_pods.go:74] duration metric: took 179.488069ms to wait for pod list to return data ...
	I1213 00:14:39.364301  177307 default_sa.go:34] waiting for default service account to be created ...
	I1213 00:14:39.560330  177307 default_sa.go:45] found service account: "default"
	I1213 00:14:39.560364  177307 default_sa.go:55] duration metric: took 196.056049ms for default service account to be created ...
	I1213 00:14:39.560376  177307 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 00:14:39.763340  177307 system_pods.go:86] 8 kube-system pods found
	I1213 00:14:39.763384  177307 system_pods.go:89] "coredns-76f75df574-8fb8b" [3ca4e237-6a35-4e8f-9731-3f9655eba995] Running
	I1213 00:14:39.763393  177307 system_pods.go:89] "etcd-no-preload-143586" [205757f5-5bd7-416c-af72-6ebf428d7302] Running
	I1213 00:14:39.763400  177307 system_pods.go:89] "kube-apiserver-no-preload-143586" [a479caf2-8ff4-4b78-8d93-e8f672a853b9] Running
	I1213 00:14:39.763405  177307 system_pods.go:89] "kube-controller-manager-no-preload-143586" [4afd5e8a-64b3-4e0c-a723-b6a6bd2445d4] Running
	I1213 00:14:39.763409  177307 system_pods.go:89] "kube-proxy-xsdtr" [23a261a4-17d1-4657-8052-02b71055c850] Running
	I1213 00:14:39.763414  177307 system_pods.go:89] "kube-scheduler-no-preload-143586" [71312243-0ba6-40e0-80a5-c1652b5270e9] Running
	I1213 00:14:39.763426  177307 system_pods.go:89] "metrics-server-57f55c9bc5-q7v45" [1579f5c9-d574-4ab8-9add-e89621b9c203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:39.763434  177307 system_pods.go:89] "storage-provisioner" [400b27cc-1713-4201-8097-3e3fd8004690] Running
	I1213 00:14:39.763449  177307 system_pods.go:126] duration metric: took 203.065345ms to wait for k8s-apps to be running ...
	I1213 00:14:39.763458  177307 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 00:14:39.763517  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:39.783072  177307 system_svc.go:56] duration metric: took 19.601725ms WaitForService to wait for kubelet.
	I1213 00:14:39.783120  177307 kubeadm.go:581] duration metric: took 5.357406192s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1213 00:14:39.783147  177307 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:14:39.962475  177307 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:14:39.962501  177307 node_conditions.go:123] node cpu capacity is 2
	I1213 00:14:39.962511  177307 node_conditions.go:105] duration metric: took 179.359327ms to run NodePressure ...
	I1213 00:14:39.962524  177307 start.go:228] waiting for startup goroutines ...
	I1213 00:14:39.962532  177307 start.go:233] waiting for cluster config update ...
	I1213 00:14:39.962544  177307 start.go:242] writing updated cluster config ...
	I1213 00:14:39.962816  177307 ssh_runner.go:195] Run: rm -f paused
	I1213 00:14:40.016206  177307 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.2 (minor skew: 1)
	I1213 00:14:40.018375  177307 out.go:177] * Done! kubectl is now configured to use "no-preload-143586" cluster and "default" namespace by default
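
The "(minor skew: 1)" note two lines up is the difference between the kubectl minor version (1.28) and the cluster minor version (1.29), which is within kubectl's supported skew of one minor version. A small sketch of that comparison follows; it is illustrative only and not minikube's implementation, which uses its own semver helpers.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorOf returns the minor component of a version such as "1.28.4" or "1.29.0-rc.2".
    func minorOf(v string) int {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	if len(parts) < 2 {
    		return 0
    	}
    	m, _ := strconv.Atoi(parts[1])
    	return m
    }

    func main() {
    	kubectl, cluster := "1.28.4", "1.29.0-rc.2" // versions reported in the log above
    	skew := minorOf(cluster) - minorOf(kubectl)
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("minor skew: %d\n", skew) // prints "minor skew: 1"
    }
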
	I1213 00:14:37.286105  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:39.786060  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:42.285678  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:44.784213  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:47.285680  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:49.783428  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:51.785923  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:54.283780  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:56.783343  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:59.283053  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:15:00.976984  176813 pod_ready.go:81] duration metric: took 4m0.000041493s waiting for pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace to be "Ready" ...
	E1213 00:15:00.977016  176813 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1213 00:15:00.977037  176813 pod_ready.go:38] duration metric: took 4m1.19985839s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:15:00.977064  176813 kubeadm.go:640] restartCluster took 5m6.659231001s
	W1213 00:15:00.977141  176813 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1213 00:15:00.977178  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 00:15:07.653665  176813 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.676456274s)
	I1213 00:15:07.653745  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:15:07.673981  176813 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:15:07.688018  176813 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:15:07.699196  176813 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:15:07.699244  176813 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1213 00:15:07.761890  176813 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1213 00:15:07.762010  176813 kubeadm.go:322] [preflight] Running pre-flight checks
	I1213 00:15:07.921068  176813 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 00:15:07.921220  176813 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 00:15:07.921360  176813 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 00:15:08.151937  176813 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 00:15:08.152063  176813 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 00:15:08.159296  176813 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1213 00:15:08.285060  176813 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 00:15:08.286903  176813 out.go:204]   - Generating certificates and keys ...
	I1213 00:15:08.287074  176813 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1213 00:15:08.287174  176813 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1213 00:15:08.290235  176813 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 00:15:08.290397  176813 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1213 00:15:08.290878  176813 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 00:15:08.291179  176813 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1213 00:15:08.291663  176813 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1213 00:15:08.292342  176813 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1213 00:15:08.292822  176813 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 00:15:08.293259  176813 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 00:15:08.293339  176813 kubeadm.go:322] [certs] Using the existing "sa" key
	I1213 00:15:08.293429  176813 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 00:15:08.526145  176813 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 00:15:08.586842  176813 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 00:15:08.636575  176813 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 00:15:08.706448  176813 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 00:15:08.710760  176813 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 00:15:08.713664  176813 out.go:204]   - Booting up control plane ...
	I1213 00:15:08.713773  176813 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 00:15:08.718431  176813 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 00:15:08.719490  176813 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 00:15:08.720327  176813 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 00:15:08.722707  176813 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 00:15:19.226839  176813 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503804 seconds
	I1213 00:15:19.227005  176813 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 00:15:19.245054  176813 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 00:15:19.773910  176813 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 00:15:19.774100  176813 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-508612 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1213 00:15:20.284136  176813 kubeadm.go:322] [bootstrap-token] Using token: lgq05i.maaa534t8w734gvq
	I1213 00:15:20.286042  176813 out.go:204]   - Configuring RBAC rules ...
	I1213 00:15:20.286186  176813 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 00:15:20.297875  176813 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 00:15:20.305644  176813 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 00:15:20.314089  176813 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 00:15:20.319091  176813 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 00:15:20.387872  176813 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1213 00:15:20.733546  176813 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1213 00:15:20.735072  176813 kubeadm.go:322] 
	I1213 00:15:20.735157  176813 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1213 00:15:20.735168  176813 kubeadm.go:322] 
	I1213 00:15:20.735280  176813 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1213 00:15:20.735291  176813 kubeadm.go:322] 
	I1213 00:15:20.735314  176813 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1213 00:15:20.735389  176813 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 00:15:20.735451  176813 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 00:15:20.735459  176813 kubeadm.go:322] 
	I1213 00:15:20.735517  176813 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1213 00:15:20.735602  176813 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 00:15:20.735660  176813 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 00:15:20.735666  176813 kubeadm.go:322] 
	I1213 00:15:20.735757  176813 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1213 00:15:20.735867  176813 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1213 00:15:20.735889  176813 kubeadm.go:322] 
	I1213 00:15:20.736036  176813 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token lgq05i.maaa534t8w734gvq \
	I1213 00:15:20.736152  176813 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d \
	I1213 00:15:20.736223  176813 kubeadm.go:322]     --control-plane 	  
	I1213 00:15:20.736240  176813 kubeadm.go:322] 
	I1213 00:15:20.736348  176813 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1213 00:15:20.736357  176813 kubeadm.go:322] 
	I1213 00:15:20.736472  176813 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token lgq05i.maaa534t8w734gvq \
	I1213 00:15:20.736596  176813 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1213 00:15:20.737307  176813 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 00:15:20.737332  176813 cni.go:84] Creating CNI manager for ""
	I1213 00:15:20.737340  176813 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:15:20.739085  176813 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:15:20.740295  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:15:20.749618  176813 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
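
The 457-byte /etc/cni/net.d/1-k8s.conflist pushed above is the bridge CNI configuration mentioned at out.go:177; its exact contents are not shown in the log. The sketch below writes an illustrative bridge+portmap conflist of the same general shape; the field values, including the 10.244.0.0/16 subnet, are assumptions rather than a copy of minikube's actual file.

    package main

    import (
    	"fmt"
    	"os"
    )

    // An illustrative bridge CNI configuration; the real 1-k8s.conflist generated
    // by minikube may differ in fields and values.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
    	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		fmt.Println("write failed:", err)
    	}
    }
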
	I1213 00:15:20.767876  176813 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:15:20.767933  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:20.767984  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=old-k8s-version-508612 minikube.k8s.io/updated_at=2023_12_13T00_15_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:21.051677  176813 ops.go:34] apiserver oom_adj: -16
	I1213 00:15:21.051709  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:21.148546  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:21.741424  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:22.240885  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:22.741651  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:23.241662  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:23.741098  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:24.241530  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:24.741035  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:25.241391  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:25.741004  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:26.241402  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:26.741333  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:27.241828  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:27.741151  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:28.240933  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:28.741661  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:29.241431  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:29.741667  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:30.241070  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:30.741117  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:31.241355  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:31.741697  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:32.241779  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:32.741165  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:33.241739  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:33.741499  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:34.241477  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:34.740804  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:35.241596  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:35.374344  176813 kubeadm.go:1088] duration metric: took 14.606462065s to wait for elevateKubeSystemPrivileges.
	I1213 00:15:35.374388  176813 kubeadm.go:406] StartCluster complete in 5m41.120911791s
	I1213 00:15:35.374416  176813 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:15:35.374522  176813 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:15:35.376587  176813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:15:35.376829  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:15:35.376896  176813 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:15:35.376998  176813 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-508612"
	I1213 00:15:35.377018  176813 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-508612"
	I1213 00:15:35.377026  176813 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-508612"
	W1213 00:15:35.377036  176813 addons.go:240] addon storage-provisioner should already be in state true
	I1213 00:15:35.377038  176813 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-508612"
	I1213 00:15:35.377075  176813 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-508612"
	W1213 00:15:35.377089  176813 addons.go:240] addon metrics-server should already be in state true
	I1213 00:15:35.377107  176813 host.go:66] Checking if "old-k8s-version-508612" exists ...
	I1213 00:15:35.377140  176813 host.go:66] Checking if "old-k8s-version-508612" exists ...
	I1213 00:15:35.377536  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.377569  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.377577  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.377603  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.377036  176813 config.go:182] Loaded profile config "old-k8s-version-508612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1213 00:15:35.377038  176813 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-508612"
	I1213 00:15:35.378232  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.378269  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.396758  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38365
	I1213 00:15:35.397242  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.397563  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46353
	I1213 00:15:35.397732  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34007
	I1213 00:15:35.398240  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.398249  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.398768  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.398789  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.398927  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.398944  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.399039  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.399048  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.399144  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.399485  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.399506  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.399699  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:15:35.399783  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.399822  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.400014  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.400052  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.403424  176813 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-508612"
	W1213 00:15:35.403445  176813 addons.go:240] addon default-storageclass should already be in state true
	I1213 00:15:35.403470  176813 host.go:66] Checking if "old-k8s-version-508612" exists ...
	I1213 00:15:35.403784  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.403809  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.419742  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46263
	I1213 00:15:35.419763  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39553
	I1213 00:15:35.420351  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.420378  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.420912  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.420927  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.421042  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.421062  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.421403  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.421450  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.421588  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:15:35.421633  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:15:35.422473  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40033
	I1213 00:15:35.423216  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.423818  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:15:35.423875  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.423890  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.426328  176813 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 00:15:35.424310  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.424522  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:15:35.428333  176813 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 00:15:35.428351  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 00:15:35.428377  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:15:35.430256  176813 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:15:35.428950  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.430439  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.431959  176813 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:15:35.431260  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.431816  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:15:35.432011  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:15:35.431977  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:15:35.432031  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.432047  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:15:35.432199  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:15:35.432359  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:15:35.432587  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:15:35.434239  176813 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-508612" context rescaled to 1 replicas
	I1213 00:15:35.434275  176813 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:15:35.435769  176813 out.go:177] * Verifying Kubernetes components...
	I1213 00:15:35.437082  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:15:35.434982  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.435627  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:15:35.437148  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:15:35.437186  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.437343  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:15:35.437515  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:15:35.437646  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:15:35.450115  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33843
	I1213 00:15:35.450582  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.451077  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.451104  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.451548  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.451822  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:15:35.453721  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:15:35.454034  176813 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:15:35.454052  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:15:35.454072  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:15:35.456976  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.457326  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:15:35.457351  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.457530  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:15:35.457709  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:15:35.457859  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:15:35.458008  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:15:35.599631  176813 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:15:35.607268  176813 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-508612" to be "Ready" ...
	I1213 00:15:35.607407  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 00:15:35.627686  176813 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 00:15:35.627720  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 00:15:35.641865  176813 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:15:35.653972  176813 node_ready.go:49] node "old-k8s-version-508612" has status "Ready":"True"
	I1213 00:15:35.654008  176813 node_ready.go:38] duration metric: took 46.699606ms waiting for node "old-k8s-version-508612" to be "Ready" ...
	I1213 00:15:35.654022  176813 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:15:35.701904  176813 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 00:15:35.701939  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 00:15:35.722752  176813 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-4xsr7" in "kube-system" namespace to be "Ready" ...
	I1213 00:15:35.779684  176813 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:15:35.779719  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 00:15:35.871071  176813 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:15:36.486377  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.486409  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.486428  176813 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1213 00:15:36.486495  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.486513  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.486715  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.486725  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.486734  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.486741  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.486816  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.486826  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.486834  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.486843  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.487015  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.487022  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.487048  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.487156  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.487172  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.487186  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.535004  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.535026  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.535335  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.535394  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.535407  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.671282  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.671308  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.671649  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.671719  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.671739  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.671758  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.671771  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.672067  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.672091  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.672092  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.672102  176813 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-508612"
	I1213 00:15:36.673881  176813 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1213 00:15:36.675200  176813 addons.go:502] enable addons completed in 1.298322525s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1213 00:15:37.860212  176813 pod_ready.go:102] pod "coredns-5644d7b6d9-4xsr7" in "kube-system" namespace has status "Ready":"False"
	I1213 00:15:40.350347  176813 pod_ready.go:92] pod "coredns-5644d7b6d9-4xsr7" in "kube-system" namespace has status "Ready":"True"
	I1213 00:15:40.350370  176813 pod_ready.go:81] duration metric: took 4.627584432s waiting for pod "coredns-5644d7b6d9-4xsr7" in "kube-system" namespace to be "Ready" ...
	I1213 00:15:40.350383  176813 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wz29m" in "kube-system" namespace to be "Ready" ...
	I1213 00:15:40.356218  176813 pod_ready.go:92] pod "kube-proxy-wz29m" in "kube-system" namespace has status "Ready":"True"
	I1213 00:15:40.356240  176813 pod_ready.go:81] duration metric: took 5.84816ms waiting for pod "kube-proxy-wz29m" in "kube-system" namespace to be "Ready" ...
	I1213 00:15:40.356252  176813 pod_ready.go:38] duration metric: took 4.702215033s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:15:40.356270  176813 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:15:40.356324  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:15:40.372391  176813 api_server.go:72] duration metric: took 4.938079614s to wait for apiserver process to appear ...
	I1213 00:15:40.372424  176813 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:15:40.372459  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:15:40.378882  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I1213 00:15:40.379747  176813 api_server.go:141] control plane version: v1.16.0
	I1213 00:15:40.379770  176813 api_server.go:131] duration metric: took 7.338199ms to wait for apiserver health ...
	I1213 00:15:40.379780  176813 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:15:40.383090  176813 system_pods.go:59] 4 kube-system pods found
	I1213 00:15:40.383110  176813 system_pods.go:61] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:40.383115  176813 system_pods.go:61] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:40.383121  176813 system_pods.go:61] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:40.383126  176813 system_pods.go:61] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:40.383133  176813 system_pods.go:74] duration metric: took 3.346988ms to wait for pod list to return data ...
	I1213 00:15:40.383140  176813 default_sa.go:34] waiting for default service account to be created ...
	I1213 00:15:40.385822  176813 default_sa.go:45] found service account: "default"
	I1213 00:15:40.385843  176813 default_sa.go:55] duration metric: took 2.696485ms for default service account to be created ...
	I1213 00:15:40.385851  176813 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 00:15:40.390030  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:40.390056  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:40.390061  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:40.390068  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:40.390072  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:40.390094  176813 retry.go:31] will retry after 206.30305ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:40.602546  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:40.602577  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:40.602582  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:40.602589  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:40.602593  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:40.602611  176813 retry.go:31] will retry after 375.148566ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:40.987598  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:40.987626  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:40.987631  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:40.987639  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:40.987645  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:40.987663  176813 retry.go:31] will retry after 354.607581ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:41.347931  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:41.347965  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:41.347974  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:41.347984  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:41.347992  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:41.348012  176813 retry.go:31] will retry after 443.179207ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:41.796661  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:41.796687  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:41.796692  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:41.796711  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:41.796716  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:41.796733  176813 retry.go:31] will retry after 468.875458ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:42.271565  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:42.271591  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:42.271596  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:42.271603  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:42.271608  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:42.271624  176813 retry.go:31] will retry after 696.629881ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:42.974971  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:42.974997  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:42.975003  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:42.975009  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:42.975015  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:42.975031  176813 retry.go:31] will retry after 830.83436ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:43.810755  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:43.810784  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:43.810792  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:43.810802  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:43.810808  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:43.810830  176813 retry.go:31] will retry after 1.429308487s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:45.245813  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:45.245844  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:45.245852  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:45.245862  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:45.245867  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:45.245887  176813 retry.go:31] will retry after 1.715356562s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:46.966484  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:46.966512  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:46.966517  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:46.966523  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:46.966529  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:46.966546  176813 retry.go:31] will retry after 2.125852813s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:49.097419  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:49.097450  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:49.097460  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:49.097472  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:49.097478  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:49.097496  176813 retry.go:31] will retry after 2.902427415s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:52.005062  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:52.005097  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:52.005106  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:52.005119  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:52.005128  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:52.005154  176813 retry.go:31] will retry after 3.461524498s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:55.471450  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:55.471474  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:55.471480  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:55.471487  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:55.471492  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:55.471509  176813 retry.go:31] will retry after 2.969353102s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:58.445285  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:58.445316  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:58.445324  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:58.445334  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:58.445341  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:58.445363  176813 retry.go:31] will retry after 3.938751371s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:02.389811  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:16:02.389839  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:02.389845  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:02.389851  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:02.389856  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:02.389873  176813 retry.go:31] will retry after 5.281550171s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:07.676759  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:16:07.676786  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:07.676791  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:07.676798  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:07.676802  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:07.676820  176813 retry.go:31] will retry after 8.193775139s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:15.875917  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:16:15.875946  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:15.875951  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:15.875958  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:15.875962  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:15.875980  176813 retry.go:31] will retry after 8.515960159s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:24.397972  176813 system_pods.go:86] 5 kube-system pods found
	I1213 00:16:24.398006  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:24.398014  176813 system_pods.go:89] "etcd-old-k8s-version-508612" [de991ae9-f906-498b-bda3-3cf40035fd6a] Running
	I1213 00:16:24.398021  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:24.398032  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:24.398039  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:24.398060  176813 retry.go:31] will retry after 10.707543157s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:35.112639  176813 system_pods.go:86] 7 kube-system pods found
	I1213 00:16:35.112667  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:35.112672  176813 system_pods.go:89] "etcd-old-k8s-version-508612" [de991ae9-f906-498b-bda3-3cf40035fd6a] Running
	I1213 00:16:35.112677  176813 system_pods.go:89] "kube-controller-manager-old-k8s-version-508612" [c6a195a2-e710-4791-b0a3-32618d3c752c] Running
	I1213 00:16:35.112681  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:35.112685  176813 system_pods.go:89] "kube-scheduler-old-k8s-version-508612" [6ee728bb-d81b-43f2-8aa3-4848871a8f41] Running
	I1213 00:16:35.112691  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:35.112696  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:35.112712  176813 retry.go:31] will retry after 13.429366805s: missing components: kube-apiserver
	I1213 00:16:48.550673  176813 system_pods.go:86] 8 kube-system pods found
	I1213 00:16:48.550704  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:48.550710  176813 system_pods.go:89] "etcd-old-k8s-version-508612" [de991ae9-f906-498b-bda3-3cf40035fd6a] Running
	I1213 00:16:48.550714  176813 system_pods.go:89] "kube-apiserver-old-k8s-version-508612" [1473501b-d17d-4bbb-a61a-1d244f54f70c] Running
	I1213 00:16:48.550718  176813 system_pods.go:89] "kube-controller-manager-old-k8s-version-508612" [c6a195a2-e710-4791-b0a3-32618d3c752c] Running
	I1213 00:16:48.550722  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:48.550726  176813 system_pods.go:89] "kube-scheduler-old-k8s-version-508612" [6ee728bb-d81b-43f2-8aa3-4848871a8f41] Running
	I1213 00:16:48.550733  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:48.550737  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:48.550747  176813 system_pods.go:126] duration metric: took 1m8.164889078s to wait for k8s-apps to be running ...
	I1213 00:16:48.550756  176813 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 00:16:48.550811  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:16:48.568833  176813 system_svc.go:56] duration metric: took 18.062353ms WaitForService to wait for kubelet.
	I1213 00:16:48.568876  176813 kubeadm.go:581] duration metric: took 1m13.134572871s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1213 00:16:48.568901  176813 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:16:48.573103  176813 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:16:48.573128  176813 node_conditions.go:123] node cpu capacity is 2
	I1213 00:16:48.573137  176813 node_conditions.go:105] duration metric: took 4.231035ms to run NodePressure ...
	I1213 00:16:48.573148  176813 start.go:228] waiting for startup goroutines ...
	I1213 00:16:48.573154  176813 start.go:233] waiting for cluster config update ...
	I1213 00:16:48.573163  176813 start.go:242] writing updated cluster config ...
	I1213 00:16:48.573436  176813 ssh_runner.go:195] Run: rm -f paused
	I1213 00:16:48.627109  176813 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1213 00:16:48.628688  176813 out.go:177] 
	W1213 00:16:48.630154  176813 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1213 00:16:48.631498  176813 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1213 00:16:48.633089  176813 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-508612" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-12-13 00:09:34 UTC, ends at Wed 2023-12-13 00:25:50 UTC. --
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.373517908Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427150373500963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=963ce0f5-6807-4bb0-b5b1-ac25979ec224 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.374217855Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6c6c9b3e-3435-4bb9-baac-ec5ee6aa3c22 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.374295751Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6c6c9b3e-3435-4bb9-baac-ec5ee6aa3c22 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.374472042Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ebfeec5f1c53776f7879ac4df43b384de96608b731f6f9214de46f78addb5bea,PodSandboxId:a4317355d619c71c83818eab8136e0f00a4d80c92f98f32841a12cf4538c0239,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702426537962582180,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wz29m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efb7496-51ae-454f-88e9-cc8b6e9680c2,},Annotations:map[string]string{io.kubernetes.container.hash: f9a8d0ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7f9cca46c1cbbb355770d4ebb6032ce196df559af73e92da021f845d1b871df,PodSandboxId:9bb6d310d2d9d6ec22629660a6052619445b4464d96daa37dca3eb27a93b3020,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426537771103303,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6520d208-a69d-43ba-b107-1767102a62d4,},Annotations:map[string]string{io.kubernetes.container.hash: e9475179,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ca2665660b00c673cfb2b2d0ae3fee983536ad6d8679a7811ffe8a3fc52515,PodSandboxId:aebd7a876fff617e2fbf3ba11fc2efe19cb6904afd9296d9c41cd9aa1dafdd40,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702426537209106347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-4xsr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69174686-a36a-4b06-b723-0df832046815,},Annotations:map[string]string{io.kubernetes.container.hash: 479e2a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654928044f339493dc40cf5c825602a04575e244b1ec8da0467d52b751450877,PodSandboxId:864c02a6bf69dbf4acadee735fce57cd716c630b38e9a9b0861276376eaad3c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702426511386648716,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b127afa6806b59f84e6ab3b018476fc4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 56252b9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1b73166520a4ebe05cdc279ad982fecc97db9e9240b2f71bb385a184ccfc76b,PodSandboxId:ca1a2c9e6ddb9483cc36af2ae1ee7a2f864e5272b86b2a08e0bd33aa4dbc5a17,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702426510108286185,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c36af79b91fb29ce09d38d1f84d66aa4d77969d6e4f3b59dbc1ba16417729a9,PodSandboxId:51ddc1aca59e145f83e8f5b5b31e5a1520017210f22b92f9f42f37df213936b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702426510077288631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d51289d3bc2e6fd6537f4477cf2fcd88c09e83d9e349289361f2b60f34e3a92,PodSandboxId:06b27d49cab092f9c749e5b8cf0ec53a152a6f8509ee480de11414738ec19f12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702426509241347655,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fc4f862b1b155f977dd622de8fefee,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 452ed09a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdd6940df379f70b0e8fb3128bfee7c86043aca24780c0d05a88d4dcad7e1cf3,PodSandboxId:06b27d49cab092f9c749e5b8cf0ec53a152a6f8509ee480de11414738ec19f12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1702426207108147850,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fc4f862b1b155f977dd622de8fefee,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 452ed09a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6c6c9b3e-3435-4bb9-baac-ec5ee6aa3c22 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.418756966Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b0f729d1-2ca2-46af-8727-9f600df7c0c9 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.418878943Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b0f729d1-2ca2-46af-8727-9f600df7c0c9 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.420421245Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7d3db0a4-affc-4006-bdcb-c622c00cde24 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.420838768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427150420821636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=7d3db0a4-affc-4006-bdcb-c622c00cde24 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.421585796Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=43ce3d66-a432-43a4-933e-44969d5433c8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.421657968Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=43ce3d66-a432-43a4-933e-44969d5433c8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.421843857Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ebfeec5f1c53776f7879ac4df43b384de96608b731f6f9214de46f78addb5bea,PodSandboxId:a4317355d619c71c83818eab8136e0f00a4d80c92f98f32841a12cf4538c0239,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702426537962582180,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wz29m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efb7496-51ae-454f-88e9-cc8b6e9680c2,},Annotations:map[string]string{io.kubernetes.container.hash: f9a8d0ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7f9cca46c1cbbb355770d4ebb6032ce196df559af73e92da021f845d1b871df,PodSandboxId:9bb6d310d2d9d6ec22629660a6052619445b4464d96daa37dca3eb27a93b3020,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426537771103303,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6520d208-a69d-43ba-b107-1767102a62d4,},Annotations:map[string]string{io.kubernetes.container.hash: e9475179,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ca2665660b00c673cfb2b2d0ae3fee983536ad6d8679a7811ffe8a3fc52515,PodSandboxId:aebd7a876fff617e2fbf3ba11fc2efe19cb6904afd9296d9c41cd9aa1dafdd40,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702426537209106347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-4xsr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69174686-a36a-4b06-b723-0df832046815,},Annotations:map[string]string{io.kubernetes.container.hash: 479e2a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654928044f339493dc40cf5c825602a04575e244b1ec8da0467d52b751450877,PodSandboxId:864c02a6bf69dbf4acadee735fce57cd716c630b38e9a9b0861276376eaad3c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702426511386648716,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b127afa6806b59f84e6ab3b018476fc4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 56252b9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1b73166520a4ebe05cdc279ad982fecc97db9e9240b2f71bb385a184ccfc76b,PodSandboxId:ca1a2c9e6ddb9483cc36af2ae1ee7a2f864e5272b86b2a08e0bd33aa4dbc5a17,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702426510108286185,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c36af79b91fb29ce09d38d1f84d66aa4d77969d6e4f3b59dbc1ba16417729a9,PodSandboxId:51ddc1aca59e145f83e8f5b5b31e5a1520017210f22b92f9f42f37df213936b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702426510077288631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d51289d3bc2e6fd6537f4477cf2fcd88c09e83d9e349289361f2b60f34e3a92,PodSandboxId:06b27d49cab092f9c749e5b8cf0ec53a152a6f8509ee480de11414738ec19f12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702426509241347655,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fc4f862b1b155f977dd622de8fefee,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 452ed09a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdd6940df379f70b0e8fb3128bfee7c86043aca24780c0d05a88d4dcad7e1cf3,PodSandboxId:06b27d49cab092f9c749e5b8cf0ec53a152a6f8509ee480de11414738ec19f12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1702426207108147850,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fc4f862b1b155f977dd622de8fefee,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 452ed09a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=43ce3d66-a432-43a4-933e-44969d5433c8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.464148614Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=74359000-4e63-43b0-9eb8-8568a03f9b9c name=/runtime.v1.RuntimeService/Version
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.464236737Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=74359000-4e63-43b0-9eb8-8568a03f9b9c name=/runtime.v1.RuntimeService/Version
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.465241692Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8be26da3-de44-42bb-9a23-6ce31f7f3116 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.465781373Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427150465758226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=8be26da3-de44-42bb-9a23-6ce31f7f3116 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.466361623Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2cafe5f3-f80c-427e-8b46-32951d7ceb7d name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.466402625Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2cafe5f3-f80c-427e-8b46-32951d7ceb7d name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.466585773Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ebfeec5f1c53776f7879ac4df43b384de96608b731f6f9214de46f78addb5bea,PodSandboxId:a4317355d619c71c83818eab8136e0f00a4d80c92f98f32841a12cf4538c0239,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702426537962582180,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wz29m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efb7496-51ae-454f-88e9-cc8b6e9680c2,},Annotations:map[string]string{io.kubernetes.container.hash: f9a8d0ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7f9cca46c1cbbb355770d4ebb6032ce196df559af73e92da021f845d1b871df,PodSandboxId:9bb6d310d2d9d6ec22629660a6052619445b4464d96daa37dca3eb27a93b3020,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426537771103303,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6520d208-a69d-43ba-b107-1767102a62d4,},Annotations:map[string]string{io.kubernetes.container.hash: e9475179,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ca2665660b00c673cfb2b2d0ae3fee983536ad6d8679a7811ffe8a3fc52515,PodSandboxId:aebd7a876fff617e2fbf3ba11fc2efe19cb6904afd9296d9c41cd9aa1dafdd40,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702426537209106347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-4xsr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69174686-a36a-4b06-b723-0df832046815,},Annotations:map[string]string{io.kubernetes.container.hash: 479e2a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654928044f339493dc40cf5c825602a04575e244b1ec8da0467d52b751450877,PodSandboxId:864c02a6bf69dbf4acadee735fce57cd716c630b38e9a9b0861276376eaad3c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702426511386648716,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b127afa6806b59f84e6ab3b018476fc4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 56252b9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1b73166520a4ebe05cdc279ad982fecc97db9e9240b2f71bb385a184ccfc76b,PodSandboxId:ca1a2c9e6ddb9483cc36af2ae1ee7a2f864e5272b86b2a08e0bd33aa4dbc5a17,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702426510108286185,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c36af79b91fb29ce09d38d1f84d66aa4d77969d6e4f3b59dbc1ba16417729a9,PodSandboxId:51ddc1aca59e145f83e8f5b5b31e5a1520017210f22b92f9f42f37df213936b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702426510077288631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d51289d3bc2e6fd6537f4477cf2fcd88c09e83d9e349289361f2b60f34e3a92,PodSandboxId:06b27d49cab092f9c749e5b8cf0ec53a152a6f8509ee480de11414738ec19f12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702426509241347655,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fc4f862b1b155f977dd622de8fefee,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 452ed09a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdd6940df379f70b0e8fb3128bfee7c86043aca24780c0d05a88d4dcad7e1cf3,PodSandboxId:06b27d49cab092f9c749e5b8cf0ec53a152a6f8509ee480de11414738ec19f12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1702426207108147850,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fc4f862b1b155f977dd622de8fefee,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 452ed09a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2cafe5f3-f80c-427e-8b46-32951d7ceb7d name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.506603665Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5ae1abab-4d2e-4060-913b-77069ee08347 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.506691600Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5ae1abab-4d2e-4060-913b-77069ee08347 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.508421697Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b37a6626-261c-422a-985f-29e41efd9dcd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.508815909Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427150508802769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=b37a6626-261c-422a-985f-29e41efd9dcd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.509461081Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b127cfdb-2c13-4698-99b2-53f5d3e180f2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.509533405Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b127cfdb-2c13-4698-99b2-53f5d3e180f2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:25:50 old-k8s-version-508612 crio[716]: time="2023-12-13 00:25:50.509713685Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ebfeec5f1c53776f7879ac4df43b384de96608b731f6f9214de46f78addb5bea,PodSandboxId:a4317355d619c71c83818eab8136e0f00a4d80c92f98f32841a12cf4538c0239,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702426537962582180,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wz29m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efb7496-51ae-454f-88e9-cc8b6e9680c2,},Annotations:map[string]string{io.kubernetes.container.hash: f9a8d0ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7f9cca46c1cbbb355770d4ebb6032ce196df559af73e92da021f845d1b871df,PodSandboxId:9bb6d310d2d9d6ec22629660a6052619445b4464d96daa37dca3eb27a93b3020,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426537771103303,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6520d208-a69d-43ba-b107-1767102a62d4,},Annotations:map[string]string{io.kubernetes.container.hash: e9475179,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ca2665660b00c673cfb2b2d0ae3fee983536ad6d8679a7811ffe8a3fc52515,PodSandboxId:aebd7a876fff617e2fbf3ba11fc2efe19cb6904afd9296d9c41cd9aa1dafdd40,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702426537209106347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-4xsr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69174686-a36a-4b06-b723-0df832046815,},Annotations:map[string]string{io.kubernetes.container.hash: 479e2a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654928044f339493dc40cf5c825602a04575e244b1ec8da0467d52b751450877,PodSandboxId:864c02a6bf69dbf4acadee735fce57cd716c630b38e9a9b0861276376eaad3c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702426511386648716,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b127afa6806b59f84e6ab3b018476fc4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 56252b9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1b73166520a4ebe05cdc279ad982fecc97db9e9240b2f71bb385a184ccfc76b,PodSandboxId:ca1a2c9e6ddb9483cc36af2ae1ee7a2f864e5272b86b2a08e0bd33aa4dbc5a17,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702426510108286185,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c36af79b91fb29ce09d38d1f84d66aa4d77969d6e4f3b59dbc1ba16417729a9,PodSandboxId:51ddc1aca59e145f83e8f5b5b31e5a1520017210f22b92f9f42f37df213936b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702426510077288631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d51289d3bc2e6fd6537f4477cf2fcd88c09e83d9e349289361f2b60f34e3a92,PodSandboxId:06b27d49cab092f9c749e5b8cf0ec53a152a6f8509ee480de11414738ec19f12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702426509241347655,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fc4f862b1b155f977dd622de8fefee,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 452ed09a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdd6940df379f70b0e8fb3128bfee7c86043aca24780c0d05a88d4dcad7e1cf3,PodSandboxId:06b27d49cab092f9c749e5b8cf0ec53a152a6f8509ee480de11414738ec19f12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1702426207108147850,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fc4f862b1b155f977dd622de8fefee,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 452ed09a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b127cfdb-2c13-4698-99b2-53f5d3e180f2 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ebfeec5f1c537       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   a4317355d619c       kube-proxy-wz29m
	b7f9cca46c1cb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   9bb6d310d2d9d       storage-provisioner
	a1ca2665660b0       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   aebd7a876fff6       coredns-5644d7b6d9-4xsr7
	654928044f339       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   864c02a6bf69d       etcd-old-k8s-version-508612
	a1b73166520a4       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   ca1a2c9e6ddb9       kube-controller-manager-old-k8s-version-508612
	3c36af79b91fb       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   51ddc1aca59e1       kube-scheduler-old-k8s-version-508612
	7d51289d3bc2e       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            1                   06b27d49cab09       kube-apiserver-old-k8s-version-508612
	fdd6940df379f       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   15 minutes ago      Exited              kube-apiserver            0                   06b27d49cab09       kube-apiserver-old-k8s-version-508612
	
	* 
	* ==> coredns [a1ca2665660b00c673cfb2b2d0ae3fee983536ad6d8679a7811ffe8a3fc52515] <==
	* .:53
	2023-12-13T00:15:37.549Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2023-12-13T00:15:37.549Z [INFO] CoreDNS-1.6.2
	2023-12-13T00:15:37.549Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-12-13T00:15:37.564Z [INFO] 127.0.0.1:41748 - 8538 "HINFO IN 2421315440976780902.6049602843531883062. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013914876s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-508612
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-508612
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446
	                    minikube.k8s.io/name=old-k8s-version-508612
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_13T00_15_20_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Dec 2023 00:15:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 13 Dec 2023 00:25:16 +0000   Wed, 13 Dec 2023 00:15:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 13 Dec 2023 00:25:16 +0000   Wed, 13 Dec 2023 00:15:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 13 Dec 2023 00:25:16 +0000   Wed, 13 Dec 2023 00:15:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 13 Dec 2023 00:25:16 +0000   Wed, 13 Dec 2023 00:15:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    old-k8s-version-508612
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 dbb494d4ff9248d69186027f329440dc
	 System UUID:                dbb494d4-ff92-48d6-9186-027f329440dc
	 Boot ID:                    bec660e6-c313-4c0b-ad4b-987009402d14
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-4xsr7                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-508612                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
	  kube-system                kube-apiserver-old-k8s-version-508612             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                kube-controller-manager-old-k8s-version-508612    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m25s
	  kube-system                kube-proxy-wz29m                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-508612             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                metrics-server-74d5856cc6-xcqf5                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-508612     Node old-k8s-version-508612 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet, old-k8s-version-508612     Node old-k8s-version-508612 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet, old-k8s-version-508612     Node old-k8s-version-508612 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-508612  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Dec13 00:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000002] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070237] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.591315] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.523909] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153431] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.968635] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.264735] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.122499] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.158630] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.142308] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.264716] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[Dec13 00:10] systemd-fstab-generator[1028]: Ignoring "noauto" for root device
	[  +0.469196] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +25.619889] kauditd_printk_skb: 13 callbacks suppressed
	[  +7.157409] kauditd_printk_skb: 2 callbacks suppressed
	[Dec13 00:15] systemd-fstab-generator[3070]: Ignoring "noauto" for root device
	[  +0.669740] kauditd_printk_skb: 6 callbacks suppressed
	[Dec13 00:16] kauditd_printk_skb: 4 callbacks suppressed
	
	* 
	* ==> etcd [654928044f339493dc40cf5c825602a04575e244b1ec8da0467d52b751450877] <==
	* 2023-12-13 00:15:11.494483 I | raft: d9e0442f914d2c09 became follower at term 0
	2023-12-13 00:15:11.494495 I | raft: newRaft d9e0442f914d2c09 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-12-13 00:15:11.494499 I | raft: d9e0442f914d2c09 became follower at term 1
	2023-12-13 00:15:11.503725 W | auth: simple token is not cryptographically signed
	2023-12-13 00:15:11.508316 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-12-13 00:15:11.509473 I | etcdserver: d9e0442f914d2c09 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-12-13 00:15:11.510125 I | etcdserver/membership: added member d9e0442f914d2c09 [https://192.168.39.70:2380] to cluster b9ca18127a3e3182
	2023-12-13 00:15:11.511082 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-13 00:15:11.511255 I | embed: listening for metrics on http://192.168.39.70:2381
	2023-12-13 00:15:11.511461 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-13 00:15:11.995500 I | raft: d9e0442f914d2c09 is starting a new election at term 1
	2023-12-13 00:15:11.995843 I | raft: d9e0442f914d2c09 became candidate at term 2
	2023-12-13 00:15:11.995953 I | raft: d9e0442f914d2c09 received MsgVoteResp from d9e0442f914d2c09 at term 2
	2023-12-13 00:15:11.995982 I | raft: d9e0442f914d2c09 became leader at term 2
	2023-12-13 00:15:11.996123 I | raft: raft.node: d9e0442f914d2c09 elected leader d9e0442f914d2c09 at term 2
	2023-12-13 00:15:11.996570 I | etcdserver: setting up the initial cluster version to 3.3
	2023-12-13 00:15:11.998124 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-12-13 00:15:11.998184 I | etcdserver/api: enabled capabilities for version 3.3
	2023-12-13 00:15:11.998208 I | etcdserver: published {Name:old-k8s-version-508612 ClientURLs:[https://192.168.39.70:2379]} to cluster b9ca18127a3e3182
	2023-12-13 00:15:11.998273 I | embed: ready to serve client requests
	2023-12-13 00:15:11.998975 I | embed: ready to serve client requests
	2023-12-13 00:15:11.999804 I | embed: serving client requests on 192.168.39.70:2379
	2023-12-13 00:15:12.001521 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-13 00:25:12.323422 I | mvcc: store.index: compact 661
	2023-12-13 00:25:12.325592 I | mvcc: finished scheduled compaction at 661 (took 1.579137ms)
	
	* 
	* ==> kernel <==
	*  00:25:50 up 16 min,  0 users,  load average: 0.04, 0.20, 0.18
	Linux old-k8s-version-508612 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [7d51289d3bc2e6fd6537f4477cf2fcd88c09e83d9e349289361f2b60f34e3a92] <==
	* I1213 00:18:38.456407       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1213 00:18:38.456539       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 00:18:38.456614       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:18:38.456621       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1213 00:20:16.608647       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1213 00:20:16.608770       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 00:20:16.608841       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:20:16.608848       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1213 00:21:16.609237       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1213 00:21:16.609369       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 00:21:16.609433       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:21:16.609448       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1213 00:23:16.610518       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1213 00:23:16.610713       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 00:23:16.610811       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:23:16.610827       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1213 00:25:16.612276       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1213 00:25:16.612387       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 00:25:16.612446       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:25:16.612456       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-apiserver [fdd6940df379f70b0e8fb3128bfee7c86043aca24780c0d05a88d4dcad7e1cf3] <==
	* W1213 00:15:06.352611       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.352673       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.352705       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.352731       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.352793       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.352740       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.352859       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.353006       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.353275       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.353578       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.353645       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.353667       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.353725       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.353746       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.353767       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.354318       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.354382       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.354409       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.354458       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.354481       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.354504       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.354529       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.354584       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:07.634820       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:07.641362       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-controller-manager [a1b73166520a4ebe05cdc279ad982fecc97db9e9240b2f71bb385a184ccfc76b] <==
	* E1213 00:19:37.620174       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:19:51.611793       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:20:07.872346       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:20:23.613880       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:20:38.125233       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:20:55.616214       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:21:08.377488       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:21:27.618627       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:21:38.629595       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:21:59.620184       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:22:08.881648       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:22:31.622357       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:22:39.134265       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:23:03.624451       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:23:09.386784       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:23:35.626936       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:23:39.638871       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:24:07.629491       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:24:09.890951       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:24:39.631502       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:24:40.143269       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1213 00:25:10.395151       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:25:11.633436       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:25:40.648195       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:25:43.635506       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [ebfeec5f1c53776f7879ac4df43b384de96608b731f6f9214de46f78addb5bea] <==
	* W1213 00:15:38.194663       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1213 00:15:38.202616       1 node.go:135] Successfully retrieved node IP: 192.168.39.70
	I1213 00:15:38.202687       1 server_others.go:149] Using iptables Proxier.
	I1213 00:15:38.203191       1 server.go:529] Version: v1.16.0
	I1213 00:15:38.204761       1 config.go:131] Starting endpoints config controller
	I1213 00:15:38.204825       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1213 00:15:38.204843       1 config.go:313] Starting service config controller
	I1213 00:15:38.204853       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1213 00:15:38.305231       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1213 00:15:38.305573       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [3c36af79b91fb29ce09d38d1f84d66aa4d77969d6e4f3b59dbc1ba16417729a9] <==
	* I1213 00:15:15.593714       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1213 00:15:15.605612       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1213 00:15:15.648369       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 00:15:15.648477       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1213 00:15:15.648511       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1213 00:15:15.648544       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1213 00:15:15.648574       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1213 00:15:15.650725       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1213 00:15:15.650799       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1213 00:15:15.650840       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1213 00:15:15.650890       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1213 00:15:15.651308       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1213 00:15:15.651506       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1213 00:15:16.651497       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 00:15:16.653129       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1213 00:15:16.653870       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1213 00:15:16.654679       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1213 00:15:16.657128       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1213 00:15:16.659979       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1213 00:15:16.660869       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1213 00:15:16.661987       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1213 00:15:16.665279       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1213 00:15:16.666769       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1213 00:15:16.666840       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1213 00:15:35.269373       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-12-13 00:09:34 UTC, ends at Wed 2023-12-13 00:25:51 UTC. --
	Dec 13 00:21:12 old-k8s-version-508612 kubelet[3076]: E1213 00:21:12.799279    3076 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 13 00:21:12 old-k8s-version-508612 kubelet[3076]: E1213 00:21:12.799310    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Dec 13 00:21:23 old-k8s-version-508612 kubelet[3076]: E1213 00:21:23.787891    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:21:35 old-k8s-version-508612 kubelet[3076]: E1213 00:21:35.787084    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:21:47 old-k8s-version-508612 kubelet[3076]: E1213 00:21:47.786889    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:21:58 old-k8s-version-508612 kubelet[3076]: E1213 00:21:58.787998    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:22:10 old-k8s-version-508612 kubelet[3076]: E1213 00:22:10.788239    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:22:21 old-k8s-version-508612 kubelet[3076]: E1213 00:22:21.787469    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:22:35 old-k8s-version-508612 kubelet[3076]: E1213 00:22:35.787450    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:22:49 old-k8s-version-508612 kubelet[3076]: E1213 00:22:49.787942    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:23:00 old-k8s-version-508612 kubelet[3076]: E1213 00:23:00.787469    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:23:14 old-k8s-version-508612 kubelet[3076]: E1213 00:23:14.788225    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:23:26 old-k8s-version-508612 kubelet[3076]: E1213 00:23:26.787108    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:23:37 old-k8s-version-508612 kubelet[3076]: E1213 00:23:37.787142    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:23:48 old-k8s-version-508612 kubelet[3076]: E1213 00:23:48.788099    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:24:02 old-k8s-version-508612 kubelet[3076]: E1213 00:24:02.787165    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:24:17 old-k8s-version-508612 kubelet[3076]: E1213 00:24:17.787305    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:24:32 old-k8s-version-508612 kubelet[3076]: E1213 00:24:32.787001    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:24:43 old-k8s-version-508612 kubelet[3076]: E1213 00:24:43.786865    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:24:55 old-k8s-version-508612 kubelet[3076]: E1213 00:24:55.791462    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:25:07 old-k8s-version-508612 kubelet[3076]: E1213 00:25:07.786972    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:25:08 old-k8s-version-508612 kubelet[3076]: E1213 00:25:08.867874    3076 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Dec 13 00:25:21 old-k8s-version-508612 kubelet[3076]: E1213 00:25:21.786844    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:25:35 old-k8s-version-508612 kubelet[3076]: E1213 00:25:35.786864    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:25:49 old-k8s-version-508612 kubelet[3076]: E1213 00:25:49.787443    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [b7f9cca46c1cbbb355770d4ebb6032ce196df559af73e92da021f845d1b871df] <==
	* I1213 00:15:37.913568       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 00:15:37.930431       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 00:15:37.930609       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 00:15:37.981995       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 00:15:37.982521       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e1a7b4b6-fff3-46ce-a8ea-3cbbb6c64a75", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-508612_76c671eb-525c-45d0-99b7-29d2ca8eea49 became leader
	I1213 00:15:37.982573       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-508612_76c671eb-525c-45d0-99b7-29d2ca8eea49!
	I1213 00:15:38.092270       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-508612_76c671eb-525c-45d0-99b7-29d2ca8eea49!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-508612 -n old-k8s-version-508612
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-508612 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-xcqf5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-508612 describe pod metrics-server-74d5856cc6-xcqf5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-508612 describe pod metrics-server-74d5856cc6-xcqf5: exit status 1 (67.843638ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-xcqf5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-508612 describe pod metrics-server-74d5856cc6-xcqf5: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.32s)
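
The ImagePullBackOff loop that dominates the kubelet journal above is expected for this profile: earlier in the run the metrics-server addon was re-pointed at a registry that does not resolve (the fake.domain entries in the Audit table further below), so kubelet can never pull the image and the pod stays non-running. A minimal sketch of that setup, reusing the flags recorded in the Audit table (profile name taken from the log above):

	# Point the metrics-server addon at a nonexistent registry so image pulls fail,
	# then stop the profile; both commands mirror the Audit table entries below.
	minikube addons enable metrics-server -p old-k8s-version-508612 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain
	minikube stop -p old-k8s-version-508612 --alsologtostderr -v=3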

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (353.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-335807 -n embed-certs-335807
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-13 00:28:58.64042048 +0000 UTC m=+5662.196558027
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-335807 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-335807 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.836µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-335807 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
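
The assertion that fails here (start_stop_delete_test.go:297) reduces to reading the container image of the dashboard-metrics-scraper deployment and checking that it contains registry.k8s.io/echoserver:1.4; the describe call above never got that far because the overall context deadline had already expired. A rough manual equivalent, assuming the same profile context and that the deployment exists:

	# Print the container image(s) of the dashboard-metrics-scraper deployment;
	# the test expects the output to contain "registry.k8s.io/echoserver:1.4".
	kubectl --context embed-certs-335807 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'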
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-335807 -n embed-certs-335807
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-335807 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-335807 logs -n 25: (1.266381738s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 13 Dec 23 00:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-884273                              | stopped-upgrade-884273       | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-884273                              | stopped-upgrade-884273       | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC | 13 Dec 23 00:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-343019 | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC | 13 Dec 23 00:00 UTC |
	|         | disable-driver-mounts-343019                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC | 13 Dec 23 00:01 UTC |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-508612        | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-335807            | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-143586             | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-743278  | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:02 UTC | 13 Dec 23 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:02 UTC |                     |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-508612             | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:03 UTC | 13 Dec 23 00:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-335807                 | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-143586                  | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-743278       | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:28 UTC | 13 Dec 23 00:28 UTC |
	| start   | -p newest-cni-628189 --memory=2200 --alsologtostderr   | newest-cni-628189            | jenkins | v1.32.0 | 13 Dec 23 00:28 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:28 UTC | 13 Dec 23 00:28 UTC |
	| start   | -p auto-120988 --memory=3072                           | auto-120988                  | jenkins | v1.32.0 | 13 Dec 23 00:28 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/13 00:28:56
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 00:28:56.676694  183173 out.go:296] Setting OutFile to fd 1 ...
	I1213 00:28:56.676836  183173 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:28:56.676846  183173 out.go:309] Setting ErrFile to fd 2...
	I1213 00:28:56.676851  183173 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:28:56.677084  183173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1213 00:28:56.677732  183173 out.go:303] Setting JSON to false
	I1213 00:28:56.678830  183173 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11485,"bootTime":1702415852,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 00:28:56.678891  183173 start.go:138] virtualization: kvm guest
	I1213 00:28:56.681246  183173 out.go:177] * [auto-120988] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 00:28:56.682928  183173 out.go:177]   - MINIKUBE_LOCATION=17777
	I1213 00:28:56.682940  183173 notify.go:220] Checking for updates...
	I1213 00:28:56.684501  183173 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 00:28:56.686210  183173 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:28:56.687770  183173 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1213 00:28:56.689191  183173 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 00:28:56.690459  183173 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 00:28:56.692253  183173 config.go:182] Loaded profile config "default-k8s-diff-port-743278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:28:56.692407  183173 config.go:182] Loaded profile config "embed-certs-335807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:28:56.692580  183173 config.go:182] Loaded profile config "newest-cni-628189": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1213 00:28:56.692682  183173 driver.go:392] Setting default libvirt URI to qemu:///system
	I1213 00:28:56.729849  183173 out.go:177] * Using the kvm2 driver based on user configuration
	I1213 00:28:56.731154  183173 start.go:298] selected driver: kvm2
	I1213 00:28:56.731170  183173 start.go:902] validating driver "kvm2" against <nil>
	I1213 00:28:56.731184  183173 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 00:28:56.731950  183173 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:28:56.732030  183173 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17777-136241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1213 00:28:56.747835  183173 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1213 00:28:56.747892  183173 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1213 00:28:56.748099  183173 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 00:28:56.748168  183173 cni.go:84] Creating CNI manager for ""
	I1213 00:28:56.748180  183173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:28:56.748190  183173 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 00:28:56.748198  183173 start_flags.go:323] config:
	{Name:auto-120988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-120988 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:28:56.748331  183173 iso.go:125] acquiring lock: {Name:mkaf0c5717f5bf6253bd7ebf86eb20a82d195bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:28:56.750221  183173 out.go:177] * Starting control plane node auto-120988 in cluster auto-120988
	I1213 00:28:52.566958  182846 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1213 00:28:52.567121  182846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:28:52.567156  182846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:28:52.581450  182846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42617
	I1213 00:28:52.581863  182846 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:28:52.582474  182846 main.go:141] libmachine: Using API Version  1
	I1213 00:28:52.582505  182846 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:28:52.582872  182846 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:28:52.583082  182846 main.go:141] libmachine: (newest-cni-628189) Calling .GetMachineName
	I1213 00:28:52.583255  182846 main.go:141] libmachine: (newest-cni-628189) Calling .DriverName
	I1213 00:28:52.583420  182846 start.go:159] libmachine.API.Create for "newest-cni-628189" (driver="kvm2")
	I1213 00:28:52.583454  182846 client.go:168] LocalClient.Create starting
	I1213 00:28:52.583481  182846 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem
	I1213 00:28:52.583513  182846 main.go:141] libmachine: Decoding PEM data...
	I1213 00:28:52.583531  182846 main.go:141] libmachine: Parsing certificate...
	I1213 00:28:52.583588  182846 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem
	I1213 00:28:52.583605  182846 main.go:141] libmachine: Decoding PEM data...
	I1213 00:28:52.583621  182846 main.go:141] libmachine: Parsing certificate...
	I1213 00:28:52.583636  182846 main.go:141] libmachine: Running pre-create checks...
	I1213 00:28:52.583648  182846 main.go:141] libmachine: (newest-cni-628189) Calling .PreCreateCheck
	I1213 00:28:52.584092  182846 main.go:141] libmachine: (newest-cni-628189) Calling .GetConfigRaw
	I1213 00:28:52.584528  182846 main.go:141] libmachine: Creating machine...
	I1213 00:28:52.584542  182846 main.go:141] libmachine: (newest-cni-628189) Calling .Create
	I1213 00:28:52.584684  182846 main.go:141] libmachine: (newest-cni-628189) Creating KVM machine...
	I1213 00:28:52.585978  182846 main.go:141] libmachine: (newest-cni-628189) DBG | found existing default KVM network
	I1213 00:28:52.587647  182846 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:28:52.587484  182869 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000147f10}
	I1213 00:28:52.593167  182846 main.go:141] libmachine: (newest-cni-628189) DBG | trying to create private KVM network mk-newest-cni-628189 192.168.39.0/24...
	I1213 00:28:52.671071  182846 main.go:141] libmachine: (newest-cni-628189) DBG | private KVM network mk-newest-cni-628189 192.168.39.0/24 created
	I1213 00:28:52.671104  182846 main.go:141] libmachine: (newest-cni-628189) Setting up store path in /home/jenkins/minikube-integration/17777-136241/.minikube/machines/newest-cni-628189 ...
	I1213 00:28:52.671121  182846 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:28:52.671030  182869 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17777-136241/.minikube
	I1213 00:28:52.671141  182846 main.go:141] libmachine: (newest-cni-628189) Building disk image from file:///home/jenkins/minikube-integration/17777-136241/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso
	I1213 00:28:52.671253  182846 main.go:141] libmachine: (newest-cni-628189) Downloading /home/jenkins/minikube-integration/17777-136241/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17777-136241/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso...
	I1213 00:28:52.897937  182846 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:28:52.897798  182869 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/newest-cni-628189/id_rsa...
	I1213 00:28:53.243961  182846 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:28:53.243823  182869 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/newest-cni-628189/newest-cni-628189.rawdisk...
	I1213 00:28:53.243990  182846 main.go:141] libmachine: (newest-cni-628189) DBG | Writing magic tar header
	I1213 00:28:53.244017  182846 main.go:141] libmachine: (newest-cni-628189) DBG | Writing SSH key tar header
	I1213 00:28:53.244031  182846 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:28:53.243960  182869 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17777-136241/.minikube/machines/newest-cni-628189 ...
	I1213 00:28:53.244586  182846 main.go:141] libmachine: (newest-cni-628189) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/newest-cni-628189
	I1213 00:28:53.244612  182846 main.go:141] libmachine: (newest-cni-628189) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17777-136241/.minikube/machines
	I1213 00:28:53.244626  182846 main.go:141] libmachine: (newest-cni-628189) Setting executable bit set on /home/jenkins/minikube-integration/17777-136241/.minikube/machines/newest-cni-628189 (perms=drwx------)
	I1213 00:28:53.244639  182846 main.go:141] libmachine: (newest-cni-628189) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17777-136241/.minikube
	I1213 00:28:53.244655  182846 main.go:141] libmachine: (newest-cni-628189) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17777-136241
	I1213 00:28:53.244666  182846 main.go:141] libmachine: (newest-cni-628189) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1213 00:28:53.244680  182846 main.go:141] libmachine: (newest-cni-628189) DBG | Checking permissions on dir: /home/jenkins
	I1213 00:28:53.244689  182846 main.go:141] libmachine: (newest-cni-628189) DBG | Checking permissions on dir: /home
	I1213 00:28:53.244705  182846 main.go:141] libmachine: (newest-cni-628189) Setting executable bit set on /home/jenkins/minikube-integration/17777-136241/.minikube/machines (perms=drwxr-xr-x)
	I1213 00:28:53.244718  182846 main.go:141] libmachine: (newest-cni-628189) DBG | Skipping /home - not owner
	I1213 00:28:53.244755  182846 main.go:141] libmachine: (newest-cni-628189) Setting executable bit set on /home/jenkins/minikube-integration/17777-136241/.minikube (perms=drwxr-xr-x)
	I1213 00:28:53.244778  182846 main.go:141] libmachine: (newest-cni-628189) Setting executable bit set on /home/jenkins/minikube-integration/17777-136241 (perms=drwxrwxr-x)
	I1213 00:28:53.244862  182846 main.go:141] libmachine: (newest-cni-628189) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1213 00:28:53.244895  182846 main.go:141] libmachine: (newest-cni-628189) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1213 00:28:53.244919  182846 main.go:141] libmachine: (newest-cni-628189) Creating domain...
	I1213 00:28:53.246160  182846 main.go:141] libmachine: (newest-cni-628189) define libvirt domain using xml: 
	I1213 00:28:53.246181  182846 main.go:141] libmachine: (newest-cni-628189) <domain type='kvm'>
	I1213 00:28:53.246189  182846 main.go:141] libmachine: (newest-cni-628189)   <name>newest-cni-628189</name>
	I1213 00:28:53.246198  182846 main.go:141] libmachine: (newest-cni-628189)   <memory unit='MiB'>2200</memory>
	I1213 00:28:53.246226  182846 main.go:141] libmachine: (newest-cni-628189)   <vcpu>2</vcpu>
	I1213 00:28:53.246249  182846 main.go:141] libmachine: (newest-cni-628189)   <features>
	I1213 00:28:53.246262  182846 main.go:141] libmachine: (newest-cni-628189)     <acpi/>
	I1213 00:28:53.246273  182846 main.go:141] libmachine: (newest-cni-628189)     <apic/>
	I1213 00:28:53.246298  182846 main.go:141] libmachine: (newest-cni-628189)     <pae/>
	I1213 00:28:53.246319  182846 main.go:141] libmachine: (newest-cni-628189)     
	I1213 00:28:53.246329  182846 main.go:141] libmachine: (newest-cni-628189)   </features>
	I1213 00:28:53.246345  182846 main.go:141] libmachine: (newest-cni-628189)   <cpu mode='host-passthrough'>
	I1213 00:28:53.246358  182846 main.go:141] libmachine: (newest-cni-628189)   
	I1213 00:28:53.246368  182846 main.go:141] libmachine: (newest-cni-628189)   </cpu>
	I1213 00:28:53.246387  182846 main.go:141] libmachine: (newest-cni-628189)   <os>
	I1213 00:28:53.246401  182846 main.go:141] libmachine: (newest-cni-628189)     <type>hvm</type>
	I1213 00:28:53.246415  182846 main.go:141] libmachine: (newest-cni-628189)     <boot dev='cdrom'/>
	I1213 00:28:53.246434  182846 main.go:141] libmachine: (newest-cni-628189)     <boot dev='hd'/>
	I1213 00:28:53.246448  182846 main.go:141] libmachine: (newest-cni-628189)     <bootmenu enable='no'/>
	I1213 00:28:53.246463  182846 main.go:141] libmachine: (newest-cni-628189)   </os>
	I1213 00:28:53.246477  182846 main.go:141] libmachine: (newest-cni-628189)   <devices>
	I1213 00:28:53.246489  182846 main.go:141] libmachine: (newest-cni-628189)     <disk type='file' device='cdrom'>
	I1213 00:28:53.246506  182846 main.go:141] libmachine: (newest-cni-628189)       <source file='/home/jenkins/minikube-integration/17777-136241/.minikube/machines/newest-cni-628189/boot2docker.iso'/>
	I1213 00:28:53.246522  182846 main.go:141] libmachine: (newest-cni-628189)       <target dev='hdc' bus='scsi'/>
	I1213 00:28:53.246540  182846 main.go:141] libmachine: (newest-cni-628189)       <readonly/>
	I1213 00:28:53.246553  182846 main.go:141] libmachine: (newest-cni-628189)     </disk>
	I1213 00:28:53.246565  182846 main.go:141] libmachine: (newest-cni-628189)     <disk type='file' device='disk'>
	I1213 00:28:53.246580  182846 main.go:141] libmachine: (newest-cni-628189)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1213 00:28:53.246600  182846 main.go:141] libmachine: (newest-cni-628189)       <source file='/home/jenkins/minikube-integration/17777-136241/.minikube/machines/newest-cni-628189/newest-cni-628189.rawdisk'/>
	I1213 00:28:53.246614  182846 main.go:141] libmachine: (newest-cni-628189)       <target dev='hda' bus='virtio'/>
	I1213 00:28:53.246626  182846 main.go:141] libmachine: (newest-cni-628189)     </disk>
	I1213 00:28:53.246640  182846 main.go:141] libmachine: (newest-cni-628189)     <interface type='network'>
	I1213 00:28:53.246653  182846 main.go:141] libmachine: (newest-cni-628189)       <source network='mk-newest-cni-628189'/>
	I1213 00:28:53.246667  182846 main.go:141] libmachine: (newest-cni-628189)       <model type='virtio'/>
	I1213 00:28:53.246683  182846 main.go:141] libmachine: (newest-cni-628189)     </interface>
	I1213 00:28:53.246697  182846 main.go:141] libmachine: (newest-cni-628189)     <interface type='network'>
	I1213 00:28:53.246711  182846 main.go:141] libmachine: (newest-cni-628189)       <source network='default'/>
	I1213 00:28:53.246729  182846 main.go:141] libmachine: (newest-cni-628189)       <model type='virtio'/>
	I1213 00:28:53.246743  182846 main.go:141] libmachine: (newest-cni-628189)     </interface>
	I1213 00:28:53.246756  182846 main.go:141] libmachine: (newest-cni-628189)     <serial type='pty'>
	I1213 00:28:53.246772  182846 main.go:141] libmachine: (newest-cni-628189)       <target port='0'/>
	I1213 00:28:53.246784  182846 main.go:141] libmachine: (newest-cni-628189)     </serial>
	I1213 00:28:53.246799  182846 main.go:141] libmachine: (newest-cni-628189)     <console type='pty'>
	I1213 00:28:53.246813  182846 main.go:141] libmachine: (newest-cni-628189)       <target type='serial' port='0'/>
	I1213 00:28:53.246825  182846 main.go:141] libmachine: (newest-cni-628189)     </console>
	I1213 00:28:53.246838  182846 main.go:141] libmachine: (newest-cni-628189)     <rng model='virtio'>
	I1213 00:28:53.246855  182846 main.go:141] libmachine: (newest-cni-628189)       <backend model='random'>/dev/random</backend>
	I1213 00:28:53.246867  182846 main.go:141] libmachine: (newest-cni-628189)     </rng>
	I1213 00:28:53.246879  182846 main.go:141] libmachine: (newest-cni-628189)     
	I1213 00:28:53.246891  182846 main.go:141] libmachine: (newest-cni-628189)     
	I1213 00:28:53.246902  182846 main.go:141] libmachine: (newest-cni-628189)   </devices>
	I1213 00:28:53.246913  182846 main.go:141] libmachine: (newest-cni-628189) </domain>
	I1213 00:28:53.246927  182846 main.go:141] libmachine: (newest-cni-628189) 
	I1213 00:28:53.251096  182846 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:13:83:23 in network default
	I1213 00:28:53.251681  182846 main.go:141] libmachine: (newest-cni-628189) Ensuring networks are active...
	I1213 00:28:53.251698  182846 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:28:53.252475  182846 main.go:141] libmachine: (newest-cni-628189) Ensuring network default is active
	I1213 00:28:53.252792  182846 main.go:141] libmachine: (newest-cni-628189) Ensuring network mk-newest-cni-628189 is active
	I1213 00:28:53.253290  182846 main.go:141] libmachine: (newest-cni-628189) Getting domain xml...
	I1213 00:28:53.254116  182846 main.go:141] libmachine: (newest-cni-628189) Creating domain...
	I1213 00:28:54.651178  182846 main.go:141] libmachine: (newest-cni-628189) Waiting to get IP...
	I1213 00:28:54.652196  182846 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:28:54.652775  182846 main.go:141] libmachine: (newest-cni-628189) DBG | unable to find current IP address of domain newest-cni-628189 in network mk-newest-cni-628189
	I1213 00:28:54.652852  182846 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:28:54.652752  182869 retry.go:31] will retry after 299.605979ms: waiting for machine to come up
	I1213 00:28:54.954519  182846 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:28:54.955089  182846 main.go:141] libmachine: (newest-cni-628189) DBG | unable to find current IP address of domain newest-cni-628189 in network mk-newest-cni-628189
	I1213 00:28:54.955125  182846 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:28:54.955006  182869 retry.go:31] will retry after 241.579242ms: waiting for machine to come up
	I1213 00:28:55.198416  182846 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:28:55.198896  182846 main.go:141] libmachine: (newest-cni-628189) DBG | unable to find current IP address of domain newest-cni-628189 in network mk-newest-cni-628189
	I1213 00:28:55.198921  182846 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:28:55.198842  182869 retry.go:31] will retry after 299.083416ms: waiting for machine to come up
	I1213 00:28:55.499525  182846 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:28:55.500081  182846 main.go:141] libmachine: (newest-cni-628189) DBG | unable to find current IP address of domain newest-cni-628189 in network mk-newest-cni-628189
	I1213 00:28:55.500108  182846 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:28:55.500045  182869 retry.go:31] will retry after 443.02179ms: waiting for machine to come up
	I1213 00:28:55.944319  182846 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:28:56.005411  182846 main.go:141] libmachine: (newest-cni-628189) DBG | unable to find current IP address of domain newest-cni-628189 in network mk-newest-cni-628189
	I1213 00:28:56.005445  182846 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:28:56.005291  182869 retry.go:31] will retry after 707.435917ms: waiting for machine to come up
	I1213 00:28:56.714478  182846 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:28:56.714948  182846 main.go:141] libmachine: (newest-cni-628189) DBG | unable to find current IP address of domain newest-cni-628189 in network mk-newest-cni-628189
	I1213 00:28:56.714981  182846 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:28:56.714897  182869 retry.go:31] will retry after 612.374899ms: waiting for machine to come up
	I1213 00:28:57.328780  182846 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:28:57.329356  182846 main.go:141] libmachine: (newest-cni-628189) DBG | unable to find current IP address of domain newest-cni-628189 in network mk-newest-cni-628189
	I1213 00:28:57.329394  182846 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:28:57.329302  182869 retry.go:31] will retry after 724.517737ms: waiting for machine to come up
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-12-13 00:08:25 UTC, ends at Wed 2023-12-13 00:28:59 UTC. --
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.423180177Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c72423d6-9ec6-4ddf-a458-f537d5ec1557 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.424985113Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=82599f3c-48b2-4284-815f-b94822195a60 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.425979154Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427339425961212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=82599f3c-48b2-4284-815f-b94822195a60 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.426601531Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4ddf1859-73d2-42ab-abc5-640ead5c2d29 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.426644679Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4ddf1859-73d2-42ab-abc5-640ead5c2d29 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.426909178Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e0d17c42c09c56f857947503ff059c73b2692795fd48cd37dede5099ef99bd8c,PodSandboxId:2afacfbbbbfe13da138f95ddca98b10ce74facab728ed724a88c7f181212cb0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426441639011725,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816660d7-a041-4695-b7da-d977b8891935,},Annotations:map[string]string{io.kubernetes.container.hash: bd5eb70a,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339d0782bfacf9b197014c96754a0f59569fcfb63fce1bb4f90bed7d66518056,PodSandboxId:759ce7bd9ba388e0908284d15942a340aa84a30244f472c249403d48c5b75d08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702426441136919423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ccq47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68f3c55f-175e-40af-a769-65c859d5012d,},Annotations:map[string]string{io.kubernetes.container.hash: 771d23cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8146da064c98405a2d2cdcd64c3f2a6e5580e6a9bbfadac2bdfc875edeb7378,PodSandboxId:1002d62d8148b0ca1dc41ef56f63c1743a44b012fe1b5ce44abd5346e5d2513e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702426440542875459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gs4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4b86e83-a0a1-4bf8-958e-e154e91f47ef,},Annotations:map[string]string{io.kubernetes.container.hash: bea787ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42423e8c2a4c3add467f60fe17710b5a0f6c79b2384aa761d1aad5e15519f61,PodSandboxId:1fd6c600d898c6263f1f31c9cea5336253a12742e56b8d9d9eff498cffcfec94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702426417123895163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: c31cdd67a6e054cf9c0b1601f37db20e,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad38ad2ba8d7ef925b8a1713b6636c494e5fc4aae29f70052c75a59a0f5f6503,PodSandboxId:daec354eb5e8a6856e6964fa58f444f5407026303401242d34c88de2533b6449,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702426416912822073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0824a86eab624ba769ff3e04bee2867a,},Annotations:
map[string]string{io.kubernetes.container.hash: f29cbe9d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c402daaf59971db0446bc27bb2a56f520952f33b29a0226962d19dbfa6ab69f9,PodSandboxId:da1c47eff717913d3accff39043e87ca6bffed2075e1ff9aeb17a728ceba8468,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702426416763737114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fdb93043e71a6cbe9511612a78a69a1,},Annotations:map[string
]string{io.kubernetes.container.hash: 60ab00ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b771f8110ea5225d736a60f1faa7ddc9ac8c341202c9bff8c1c82f46d16082c1,PodSandboxId:154d2c4d08454757e63a28ea0f551da1c3b91db6b21adb01eb12bf0017127246,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702426416669288068,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb76d93a779cccf3f04273dc3f836d
5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4ddf1859-73d2-42ab-abc5-640ead5c2d29 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.463578250Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=33afa604-2ff0-47aa-aefc-05e7f5c31944 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.463977413Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2afacfbbbbfe13da138f95ddca98b10ce74facab728ed724a88c7f181212cb0e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:816660d7-a041-4695-b7da-d977b8891935,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702426440592142258,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816660d7-a041-4695-b7da-d977b8891935,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube
-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-13T00:14:00.218933330Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e8ff5c49e52ecb9a4dd4ec9e7baa315de01bea3cc5c844ca68bf8ec9b29304bc,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-z7qb4,Uid:b33959c3-63b7-4a81-adda-6d2971036e89,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702426440330313623,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-z7qb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b33959c3-63b7-4a81-adda-6d2971036e8
9,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-13T00:13:59.973671746Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1002d62d8148b0ca1dc41ef56f63c1743a44b012fe1b5ce44abd5346e5d2513e,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-gs4kb,Uid:d4b86e83-a0a1-4bf8-958e-e154e91f47ef,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702426438048373249,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-gs4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4b86e83-a0a1-4bf8-958e-e154e91f47ef,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-13T00:13:57.408422625Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:759ce7bd9ba388e0908284d15942a340aa84a30244f472c249403d48c5b75d08,Metadata:&PodSandboxMetadata{Name:kube-proxy-ccq47,Uid:68f3c55f-175e-40af-a769-65
c859d5012d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702426437462114143,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ccq47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68f3c55f-175e-40af-a769-65c859d5012d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-13T00:13:57.114091953Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:da1c47eff717913d3accff39043e87ca6bffed2075e1ff9aeb17a728ceba8468,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-335807,Uid:5fdb93043e71a6cbe9511612a78a69a1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702426416079127067,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fdb93043e71a6cbe9511612a
78a69a1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.249:8443,kubernetes.io/config.hash: 5fdb93043e71a6cbe9511612a78a69a1,kubernetes.io/config.seen: 2023-12-13T00:13:35.514949853Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1fd6c600d898c6263f1f31c9cea5336253a12742e56b8d9d9eff498cffcfec94,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-335807,Uid:c31cdd67a6e054cf9c0b1601f37db20e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702426416073747832,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31cdd67a6e054cf9c0b1601f37db20e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c31cdd67a6e054cf9c0b1601f37db20e,kubernetes.io/config.seen: 2023-12-13T00:13:35.514954443Z,kubernetes.io/config.source:
file,},RuntimeHandler:,},&PodSandbox{Id:daec354eb5e8a6856e6964fa58f444f5407026303401242d34c88de2533b6449,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-335807,Uid:0824a86eab624ba769ff3e04bee2867a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702426416058930912,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0824a86eab624ba769ff3e04bee2867a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.249:2379,kubernetes.io/config.hash: 0824a86eab624ba769ff3e04bee2867a,kubernetes.io/config.seen: 2023-12-13T00:13:35.514955290Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:154d2c4d08454757e63a28ea0f551da1c3b91db6b21adb01eb12bf0017127246,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-335807,Uid:7eb76d93a779cccf3f04273dc3f836d5,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1702426416016163865,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb76d93a779cccf3f04273dc3f836d5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7eb76d93a779cccf3f04273dc3f836d5,kubernetes.io/config.seen: 2023-12-13T00:13:35.514953560Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=33afa604-2ff0-47aa-aefc-05e7f5c31944 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.464943368Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=99faa672-906d-4fd6-a0c5-9b556691ac48 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.465009226Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=99faa672-906d-4fd6-a0c5-9b556691ac48 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.465173202Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e0d17c42c09c56f857947503ff059c73b2692795fd48cd37dede5099ef99bd8c,PodSandboxId:2afacfbbbbfe13da138f95ddca98b10ce74facab728ed724a88c7f181212cb0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426441639011725,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816660d7-a041-4695-b7da-d977b8891935,},Annotations:map[string]string{io.kubernetes.container.hash: bd5eb70a,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339d0782bfacf9b197014c96754a0f59569fcfb63fce1bb4f90bed7d66518056,PodSandboxId:759ce7bd9ba388e0908284d15942a340aa84a30244f472c249403d48c5b75d08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702426441136919423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ccq47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68f3c55f-175e-40af-a769-65c859d5012d,},Annotations:map[string]string{io.kubernetes.container.hash: 771d23cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8146da064c98405a2d2cdcd64c3f2a6e5580e6a9bbfadac2bdfc875edeb7378,PodSandboxId:1002d62d8148b0ca1dc41ef56f63c1743a44b012fe1b5ce44abd5346e5d2513e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702426440542875459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gs4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4b86e83-a0a1-4bf8-958e-e154e91f47ef,},Annotations:map[string]string{io.kubernetes.container.hash: bea787ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42423e8c2a4c3add467f60fe17710b5a0f6c79b2384aa761d1aad5e15519f61,PodSandboxId:1fd6c600d898c6263f1f31c9cea5336253a12742e56b8d9d9eff498cffcfec94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702426417123895163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: c31cdd67a6e054cf9c0b1601f37db20e,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad38ad2ba8d7ef925b8a1713b6636c494e5fc4aae29f70052c75a59a0f5f6503,PodSandboxId:daec354eb5e8a6856e6964fa58f444f5407026303401242d34c88de2533b6449,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702426416912822073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0824a86eab624ba769ff3e04bee2867a,},Annotations:
map[string]string{io.kubernetes.container.hash: f29cbe9d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c402daaf59971db0446bc27bb2a56f520952f33b29a0226962d19dbfa6ab69f9,PodSandboxId:da1c47eff717913d3accff39043e87ca6bffed2075e1ff9aeb17a728ceba8468,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702426416763737114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fdb93043e71a6cbe9511612a78a69a1,},Annotations:map[string
]string{io.kubernetes.container.hash: 60ab00ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b771f8110ea5225d736a60f1faa7ddc9ac8c341202c9bff8c1c82f46d16082c1,PodSandboxId:154d2c4d08454757e63a28ea0f551da1c3b91db6b21adb01eb12bf0017127246,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702426416669288068,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb76d93a779cccf3f04273dc3f836d
5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=99faa672-906d-4fd6-a0c5-9b556691ac48 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.470806281Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=564a5986-02a2-482a-80a8-54bef2eb9fae name=/runtime.v1.RuntimeService/Version
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.470875752Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=564a5986-02a2-482a-80a8-54bef2eb9fae name=/runtime.v1.RuntimeService/Version
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.472412657Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=22e9b18e-0612-44e5-960d-d181e03c8e53 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.472921471Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427339472905307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=22e9b18e-0612-44e5-960d-d181e03c8e53 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.473731355Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=80072b57-04d9-4178-b63a-2e26d2577487 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.473847853Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=80072b57-04d9-4178-b63a-2e26d2577487 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.474015123Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e0d17c42c09c56f857947503ff059c73b2692795fd48cd37dede5099ef99bd8c,PodSandboxId:2afacfbbbbfe13da138f95ddca98b10ce74facab728ed724a88c7f181212cb0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426441639011725,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816660d7-a041-4695-b7da-d977b8891935,},Annotations:map[string]string{io.kubernetes.container.hash: bd5eb70a,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339d0782bfacf9b197014c96754a0f59569fcfb63fce1bb4f90bed7d66518056,PodSandboxId:759ce7bd9ba388e0908284d15942a340aa84a30244f472c249403d48c5b75d08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702426441136919423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ccq47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68f3c55f-175e-40af-a769-65c859d5012d,},Annotations:map[string]string{io.kubernetes.container.hash: 771d23cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8146da064c98405a2d2cdcd64c3f2a6e5580e6a9bbfadac2bdfc875edeb7378,PodSandboxId:1002d62d8148b0ca1dc41ef56f63c1743a44b012fe1b5ce44abd5346e5d2513e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702426440542875459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gs4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4b86e83-a0a1-4bf8-958e-e154e91f47ef,},Annotations:map[string]string{io.kubernetes.container.hash: bea787ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42423e8c2a4c3add467f60fe17710b5a0f6c79b2384aa761d1aad5e15519f61,PodSandboxId:1fd6c600d898c6263f1f31c9cea5336253a12742e56b8d9d9eff498cffcfec94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702426417123895163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: c31cdd67a6e054cf9c0b1601f37db20e,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad38ad2ba8d7ef925b8a1713b6636c494e5fc4aae29f70052c75a59a0f5f6503,PodSandboxId:daec354eb5e8a6856e6964fa58f444f5407026303401242d34c88de2533b6449,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702426416912822073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0824a86eab624ba769ff3e04bee2867a,},Annotations:
map[string]string{io.kubernetes.container.hash: f29cbe9d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c402daaf59971db0446bc27bb2a56f520952f33b29a0226962d19dbfa6ab69f9,PodSandboxId:da1c47eff717913d3accff39043e87ca6bffed2075e1ff9aeb17a728ceba8468,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702426416763737114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fdb93043e71a6cbe9511612a78a69a1,},Annotations:map[string
]string{io.kubernetes.container.hash: 60ab00ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b771f8110ea5225d736a60f1faa7ddc9ac8c341202c9bff8c1c82f46d16082c1,PodSandboxId:154d2c4d08454757e63a28ea0f551da1c3b91db6b21adb01eb12bf0017127246,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702426416669288068,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb76d93a779cccf3f04273dc3f836d
5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=80072b57-04d9-4178-b63a-2e26d2577487 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.513375145Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7f4b0008-66ff-4b9d-b12d-fe20995de654 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.513462777Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7f4b0008-66ff-4b9d-b12d-fe20995de654 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.515417400Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c74209b5-c32f-48d6-bc1f-d852af94addf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.515859814Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427339515846378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c74209b5-c32f-48d6-bc1f-d852af94addf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.516985906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9b43ab99-5dad-4111-b4ef-c32e96c9ef34 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.517057699Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9b43ab99-5dad-4111-b4ef-c32e96c9ef34 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:59 embed-certs-335807 crio[726]: time="2023-12-13 00:28:59.517209604Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e0d17c42c09c56f857947503ff059c73b2692795fd48cd37dede5099ef99bd8c,PodSandboxId:2afacfbbbbfe13da138f95ddca98b10ce74facab728ed724a88c7f181212cb0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426441639011725,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816660d7-a041-4695-b7da-d977b8891935,},Annotations:map[string]string{io.kubernetes.container.hash: bd5eb70a,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:339d0782bfacf9b197014c96754a0f59569fcfb63fce1bb4f90bed7d66518056,PodSandboxId:759ce7bd9ba388e0908284d15942a340aa84a30244f472c249403d48c5b75d08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702426441136919423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ccq47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68f3c55f-175e-40af-a769-65c859d5012d,},Annotations:map[string]string{io.kubernetes.container.hash: 771d23cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8146da064c98405a2d2cdcd64c3f2a6e5580e6a9bbfadac2bdfc875edeb7378,PodSandboxId:1002d62d8148b0ca1dc41ef56f63c1743a44b012fe1b5ce44abd5346e5d2513e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702426440542875459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gs4kb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4b86e83-a0a1-4bf8-958e-e154e91f47ef,},Annotations:map[string]string{io.kubernetes.container.hash: bea787ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42423e8c2a4c3add467f60fe17710b5a0f6c79b2384aa761d1aad5e15519f61,PodSandboxId:1fd6c600d898c6263f1f31c9cea5336253a12742e56b8d9d9eff498cffcfec94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702426417123895163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: c31cdd67a6e054cf9c0b1601f37db20e,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad38ad2ba8d7ef925b8a1713b6636c494e5fc4aae29f70052c75a59a0f5f6503,PodSandboxId:daec354eb5e8a6856e6964fa58f444f5407026303401242d34c88de2533b6449,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702426416912822073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0824a86eab624ba769ff3e04bee2867a,},Annotations:
map[string]string{io.kubernetes.container.hash: f29cbe9d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c402daaf59971db0446bc27bb2a56f520952f33b29a0226962d19dbfa6ab69f9,PodSandboxId:da1c47eff717913d3accff39043e87ca6bffed2075e1ff9aeb17a728ceba8468,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702426416763737114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fdb93043e71a6cbe9511612a78a69a1,},Annotations:map[string
]string{io.kubernetes.container.hash: 60ab00ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b771f8110ea5225d736a60f1faa7ddc9ac8c341202c9bff8c1c82f46d16082c1,PodSandboxId:154d2c4d08454757e63a28ea0f551da1c3b91db6b21adb01eb12bf0017127246,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702426416669288068,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-335807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb76d93a779cccf3f04273dc3f836d
5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9b43ab99-5dad-4111-b4ef-c32e96c9ef34 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e0d17c42c09c5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   2afacfbbbbfe1       storage-provisioner
	339d0782bfacf       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   14 minutes ago      Running             kube-proxy                0                   759ce7bd9ba38       kube-proxy-ccq47
	c8146da064c98       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   1002d62d8148b       coredns-5dd5756b68-gs4kb
	d42423e8c2a4c       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   15 minutes ago      Running             kube-scheduler            2                   1fd6c600d898c       kube-scheduler-embed-certs-335807
	ad38ad2ba8d7e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   daec354eb5e8a       etcd-embed-certs-335807
	c402daaf59971       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   15 minutes ago      Running             kube-apiserver            2                   da1c47eff7179       kube-apiserver-embed-certs-335807
	b771f8110ea52       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   15 minutes ago      Running             kube-controller-manager   2                   154d2c4d08454       kube-controller-manager-embed-certs-335807
	
	* 
	* ==> coredns [c8146da064c98405a2d2cdcd64c3f2a6e5580e6a9bbfadac2bdfc875edeb7378] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:36328 - 4273 "HINFO IN 8678516472761787121.6070623347578583618. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014356926s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-335807
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-335807
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446
	                    minikube.k8s.io/name=embed-certs-335807
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_13T00_13_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Dec 2023 00:13:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-335807
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 13 Dec 2023 00:28:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 13 Dec 2023 00:24:17 +0000   Wed, 13 Dec 2023 00:13:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 13 Dec 2023 00:24:17 +0000   Wed, 13 Dec 2023 00:13:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 13 Dec 2023 00:24:17 +0000   Wed, 13 Dec 2023 00:13:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 13 Dec 2023 00:24:17 +0000   Wed, 13 Dec 2023 00:13:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.249
	  Hostname:    embed-certs-335807
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 f86487f5eff8493d8f8c3113884f4708
	  System UUID:                f86487f5-eff8-493d-8f8c-3113884f4708
	  Boot ID:                    4e2e7d95-2434-46bf-b05f-70d0b33de31f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-gs4kb                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-335807                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-335807             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-335807    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-ccq47                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-335807             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-z7qb4               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-335807 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-335807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-335807 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node embed-certs-335807 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node embed-certs-335807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node embed-certs-335807 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m                kubelet          Node embed-certs-335807 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m                kubelet          Node embed-certs-335807 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node embed-certs-335807 event: Registered Node embed-certs-335807 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec13 00:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068122] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.371535] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.471790] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.134572] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.400982] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.436867] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.108898] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.141874] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.125633] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[  +0.207261] systemd-fstab-generator[712]: Ignoring "noauto" for root device
	[ +17.675083] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[Dec13 00:09] kauditd_printk_skb: 34 callbacks suppressed
	[Dec13 00:13] kauditd_printk_skb: 2 callbacks suppressed
	[  +4.996799] systemd-fstab-generator[3704]: Ignoring "noauto" for root device
	[  +9.805914] systemd-fstab-generator[4029]: Ignoring "noauto" for root device
	[Dec13 00:14] kauditd_printk_skb: 9 callbacks suppressed
	
	* 
	* ==> etcd [ad38ad2ba8d7ef925b8a1713b6636c494e5fc4aae29f70052c75a59a0f5f6503] <==
	* {"level":"info","ts":"2023-12-13T00:13:38.955185Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-13T00:13:39.478063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7bf18ae696d1660f is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-13T00:13:39.478166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7bf18ae696d1660f became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-13T00:13:39.478224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7bf18ae696d1660f received MsgPreVoteResp from 7bf18ae696d1660f at term 1"}
	{"level":"info","ts":"2023-12-13T00:13:39.478261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7bf18ae696d1660f became candidate at term 2"}
	{"level":"info","ts":"2023-12-13T00:13:39.478297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7bf18ae696d1660f received MsgVoteResp from 7bf18ae696d1660f at term 2"}
	{"level":"info","ts":"2023-12-13T00:13:39.478325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7bf18ae696d1660f became leader at term 2"}
	{"level":"info","ts":"2023-12-13T00:13:39.47835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7bf18ae696d1660f elected leader 7bf18ae696d1660f at term 2"}
	{"level":"info","ts":"2023-12-13T00:13:39.479661Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-13T00:13:39.481064Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7bf18ae696d1660f","local-member-attributes":"{Name:embed-certs-335807 ClientURLs:[https://192.168.61.249:2379]}","request-path":"/0/members/7bf18ae696d1660f/attributes","cluster-id":"573ffd3ad1c9e277","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-13T00:13:39.481224Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-13T00:13:39.481884Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"573ffd3ad1c9e277","local-member-id":"7bf18ae696d1660f","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-13T00:13:39.481992Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-13T00:13:39.482044Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-13T00:13:39.482717Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.249:2379"}
	{"level":"info","ts":"2023-12-13T00:13:39.483128Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-13T00:13:39.483961Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-13T00:13:39.485241Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-13T00:13:39.485285Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-13T00:23:39.834915Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":722}
	{"level":"info","ts":"2023-12-13T00:23:39.83873Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":722,"took":"3.197099ms","hash":373933374}
	{"level":"info","ts":"2023-12-13T00:23:39.838987Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":373933374,"revision":722,"compact-revision":-1}
	{"level":"info","ts":"2023-12-13T00:28:39.846018Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":965}
	{"level":"info","ts":"2023-12-13T00:28:39.848035Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":965,"took":"1.40103ms","hash":4161469143}
	{"level":"info","ts":"2023-12-13T00:28:39.848128Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4161469143,"revision":965,"compact-revision":722}
	
	* 
	* ==> kernel <==
	*  00:28:59 up 20 min,  0 users,  load average: 0.41, 0.42, 0.29
	Linux embed-certs-335807 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c402daaf59971db0446bc27bb2a56f520952f33b29a0226962d19dbfa6ab69f9] <==
	* E1213 00:24:42.469180       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:24:42.469187       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1213 00:25:41.331682       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1213 00:26:41.332029       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1213 00:26:42.468110       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:26:42.468188       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1213 00:26:42.468201       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 00:26:42.469719       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:26:42.469991       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:26:42.470030       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1213 00:27:41.331326       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1213 00:28:41.332240       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1213 00:28:41.472636       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:28:41.472751       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:28:41.473429       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1213 00:28:42.473026       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:28:42.473115       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1213 00:28:42.473127       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 00:28:42.473039       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:28:42.473197       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:28:42.474206       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [b771f8110ea5225d736a60f1faa7ddc9ac8c341202c9bff8c1c82f46d16082c1] <==
	* I1213 00:23:27.076448       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:23:56.450635       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:23:57.085599       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:24:26.458197       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:24:27.096107       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:24:56.467384       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:24:57.105666       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1213 00:25:19.135632       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="341.723µs"
	E1213 00:25:26.474296       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:25:27.120676       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1213 00:25:34.131630       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="160.04µs"
	E1213 00:25:56.480583       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:25:57.130453       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:26:26.485969       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:26:27.143251       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:26:56.493137       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:26:57.155491       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:27:26.499008       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:27:27.166169       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:27:56.506620       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:27:57.197542       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:28:26.512502       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:28:27.205911       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:28:56.519054       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:28:57.215711       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [339d0782bfacf9b197014c96754a0f59569fcfb63fce1bb4f90bed7d66518056] <==
	* I1213 00:14:01.734610       1 server_others.go:69] "Using iptables proxy"
	I1213 00:14:01.767124       1 node.go:141] Successfully retrieved node IP: 192.168.61.249
	I1213 00:14:01.911911       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1213 00:14:01.912016       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 00:14:01.916869       1 server_others.go:152] "Using iptables Proxier"
	I1213 00:14:01.917575       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1213 00:14:01.917839       1 server.go:846] "Version info" version="v1.28.4"
	I1213 00:14:01.918463       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 00:14:01.922527       1 config.go:188] "Starting service config controller"
	I1213 00:14:01.922683       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1213 00:14:01.923217       1 config.go:97] "Starting endpoint slice config controller"
	I1213 00:14:01.923531       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1213 00:14:01.924146       1 config.go:315] "Starting node config controller"
	I1213 00:14:01.924198       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1213 00:14:02.023426       1 shared_informer.go:318] Caches are synced for service config
	I1213 00:14:02.023750       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1213 00:14:02.024459       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d42423e8c2a4c3add467f60fe17710b5a0f6c79b2384aa761d1aad5e15519f61] <==
	* E1213 00:13:41.488379       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1213 00:13:41.488385       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 00:13:41.488391       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1213 00:13:41.488398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1213 00:13:42.295553       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1213 00:13:42.295669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1213 00:13:42.395671       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1213 00:13:42.396157       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1213 00:13:42.403128       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1213 00:13:42.403180       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1213 00:13:42.529386       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1213 00:13:42.529433       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 00:13:42.566646       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1213 00:13:42.566696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1213 00:13:42.688998       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1213 00:13:42.689049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1213 00:13:42.740856       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1213 00:13:42.740907       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1213 00:13:42.767127       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1213 00:13:42.767183       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1213 00:13:42.787559       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 00:13:42.787682       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1213 00:13:42.790624       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1213 00:13:42.790725       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1213 00:13:44.967154       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-12-13 00:08:25 UTC, ends at Wed 2023-12-13 00:29:00 UTC. --
	Dec 13 00:26:38 embed-certs-335807 kubelet[4036]: E1213 00:26:38.114105    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:26:45 embed-certs-335807 kubelet[4036]: E1213 00:26:45.197095    4036 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 13 00:26:45 embed-certs-335807 kubelet[4036]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 00:26:45 embed-certs-335807 kubelet[4036]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 00:26:45 embed-certs-335807 kubelet[4036]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 00:26:50 embed-certs-335807 kubelet[4036]: E1213 00:26:50.121949    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:27:02 embed-certs-335807 kubelet[4036]: E1213 00:27:02.114512    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:27:17 embed-certs-335807 kubelet[4036]: E1213 00:27:17.114623    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:27:30 embed-certs-335807 kubelet[4036]: E1213 00:27:30.114344    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:27:42 embed-certs-335807 kubelet[4036]: E1213 00:27:42.114475    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:27:45 embed-certs-335807 kubelet[4036]: E1213 00:27:45.197470    4036 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 13 00:27:45 embed-certs-335807 kubelet[4036]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 00:27:45 embed-certs-335807 kubelet[4036]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 00:27:45 embed-certs-335807 kubelet[4036]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 00:27:53 embed-certs-335807 kubelet[4036]: E1213 00:27:53.114033    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:28:07 embed-certs-335807 kubelet[4036]: E1213 00:28:07.115336    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:28:19 embed-certs-335807 kubelet[4036]: E1213 00:28:19.114953    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:28:34 embed-certs-335807 kubelet[4036]: E1213 00:28:34.114824    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:28:45 embed-certs-335807 kubelet[4036]: E1213 00:28:45.121222    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	Dec 13 00:28:45 embed-certs-335807 kubelet[4036]: E1213 00:28:45.198913    4036 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 13 00:28:45 embed-certs-335807 kubelet[4036]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 00:28:45 embed-certs-335807 kubelet[4036]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 00:28:45 embed-certs-335807 kubelet[4036]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 00:28:45 embed-certs-335807 kubelet[4036]: E1213 00:28:45.348540    4036 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Dec 13 00:28:59 embed-certs-335807 kubelet[4036]: E1213 00:28:59.115887    4036 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z7qb4" podUID="b33959c3-63b7-4a81-adda-6d2971036e89"
	
	* 
	* ==> storage-provisioner [e0d17c42c09c56f857947503ff059c73b2692795fd48cd37dede5099ef99bd8c] <==
	* I1213 00:14:01.817023       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 00:14:01.828214       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 00:14:01.828307       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 00:14:01.873673       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 00:14:01.874059       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-335807_a72f11cb-1fd3-4017-b701-43bd84a93d17!
	I1213 00:14:01.876897       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a843e854-866a-4e87-b1b9-076260b696c7", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-335807_a72f11cb-1fd3-4017-b701-43bd84a93d17 became leader
	I1213 00:14:01.975483       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-335807_a72f11cb-1fd3-4017-b701-43bd84a93d17!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-335807 -n embed-certs-335807
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-335807 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-z7qb4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-335807 describe pod metrics-server-57f55c9bc5-z7qb4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-335807 describe pod metrics-server-57f55c9bc5-z7qb4: exit status 1 (68.630681ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-z7qb4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-335807 describe pod metrics-server-57f55c9bc5-z7qb4: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (353.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (457.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-743278 -n default-k8s-diff-port-743278
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-13 00:30:52.807077832 +0000 UTC m=+5776.363215370
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-743278 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-743278 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.397µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-743278 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-743278 -n default-k8s-diff-port-743278
E1213 00:30:52.895746  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/client.crt: no such file or directory
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-743278 logs -n 25
E1213 00:30:54.176815  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-743278 logs -n 25: (1.448590257s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-335807            | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-143586             | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-743278  | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:02 UTC | 13 Dec 23 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:02 UTC |                     |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-508612             | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:03 UTC | 13 Dec 23 00:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-335807                 | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-143586                  | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-743278       | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:28 UTC | 13 Dec 23 00:28 UTC |
	| start   | -p newest-cni-628189 --memory=2200 --alsologtostderr   | newest-cni-628189            | jenkins | v1.32.0 | 13 Dec 23 00:28 UTC | 13 Dec 23 00:29 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:28 UTC | 13 Dec 23 00:28 UTC |
	| start   | -p auto-120988 --memory=3072                           | auto-120988                  | jenkins | v1.32.0 | 13 Dec 23 00:28 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:29 UTC | 13 Dec 23 00:29 UTC |
	| start   | -p kindnet-120988                                      | kindnet-120988               | jenkins | v1.32.0 | 13 Dec 23 00:29 UTC | 13 Dec 23 00:30 UTC |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-628189             | newest-cni-628189            | jenkins | v1.32.0 | 13 Dec 23 00:29 UTC | 13 Dec 23 00:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-628189                                   | newest-cni-628189            | jenkins | v1.32.0 | 13 Dec 23 00:29 UTC | 13 Dec 23 00:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-628189                  | newest-cni-628189            | jenkins | v1.32.0 | 13 Dec 23 00:29 UTC | 13 Dec 23 00:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-628189 --memory=2200 --alsologtostderr   | newest-cni-628189            | jenkins | v1.32.0 | 13 Dec 23 00:29 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/13 00:29:57
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 00:29:57.174952  184055 out.go:296] Setting OutFile to fd 1 ...
	I1213 00:29:57.175119  184055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:29:57.175135  184055 out.go:309] Setting ErrFile to fd 2...
	I1213 00:29:57.175143  184055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:29:57.175490  184055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1213 00:29:57.176799  184055 out.go:303] Setting JSON to false
	I1213 00:29:57.178221  184055 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11545,"bootTime":1702415852,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 00:29:57.178309  184055 start.go:138] virtualization: kvm guest
	I1213 00:29:57.180620  184055 out.go:177] * [newest-cni-628189] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 00:29:57.183419  184055 out.go:177]   - MINIKUBE_LOCATION=17777
	I1213 00:29:57.183466  184055 notify.go:220] Checking for updates...
	I1213 00:29:57.185130  184055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 00:29:57.187090  184055 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:29:57.188603  184055 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1213 00:29:57.190161  184055 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 00:29:57.191723  184055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 00:29:57.193894  184055 config.go:182] Loaded profile config "newest-cni-628189": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1213 00:29:57.194483  184055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:29:57.194572  184055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:29:57.214593  184055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36407
	I1213 00:29:57.215046  184055 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:29:57.215876  184055 main.go:141] libmachine: Using API Version  1
	I1213 00:29:57.215916  184055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:29:57.216339  184055 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:29:57.216605  184055 main.go:141] libmachine: (newest-cni-628189) Calling .DriverName
	I1213 00:29:57.216890  184055 driver.go:392] Setting default libvirt URI to qemu:///system
	I1213 00:29:57.217325  184055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:29:57.217371  184055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:29:57.237105  184055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35975
	I1213 00:29:57.237628  184055 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:29:57.238229  184055 main.go:141] libmachine: Using API Version  1
	I1213 00:29:57.238266  184055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:29:57.238618  184055 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:29:57.238862  184055 main.go:141] libmachine: (newest-cni-628189) Calling .DriverName
	I1213 00:29:57.291349  184055 out.go:177] * Using the kvm2 driver based on existing profile
	I1213 00:29:57.293144  184055 start.go:298] selected driver: kvm2
	I1213 00:29:57.293168  184055 start.go:902] validating driver "kvm2" against &{Name:newest-cni-628189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-628189 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:29:57.293338  184055 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 00:29:57.294305  184055 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:29:57.294417  184055 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17777-136241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1213 00:29:57.315511  184055 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1213 00:29:57.316028  184055 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 00:29:57.316105  184055 cni.go:84] Creating CNI manager for ""
	I1213 00:29:57.316121  184055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:29:57.316138  184055 start_flags.go:323] config:
	{Name:newest-cni-628189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-628189 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:29:57.316365  184055 iso.go:125] acquiring lock: {Name:mkaf0c5717f5bf6253bd7ebf86eb20a82d195bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:29:57.318401  184055 out.go:177] * Starting control plane node newest-cni-628189 in cluster newest-cni-628189
	I1213 00:29:58.145529  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:29:58.145967  183417 main.go:141] libmachine: (kindnet-120988) DBG | unable to find current IP address of domain kindnet-120988 in network mk-kindnet-120988
	I1213 00:29:58.146030  183417 main.go:141] libmachine: (kindnet-120988) DBG | I1213 00:29:58.145964  183746 retry.go:31] will retry after 4.469395762s: waiting for machine to come up
	I1213 00:29:57.319697  184055 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1213 00:29:57.319744  184055 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1213 00:29:57.319770  184055 cache.go:56] Caching tarball of preloaded images
	I1213 00:29:57.319875  184055 preload.go:174] Found /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 00:29:57.319890  184055 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I1213 00:29:57.320028  184055 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/newest-cni-628189/config.json ...
	I1213 00:29:57.320255  184055 start.go:365] acquiring machines lock for newest-cni-628189: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 00:30:02.724522  183173 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.002017 seconds
	I1213 00:30:02.724697  183173 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 00:30:02.740600  183173 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 00:30:03.274348  183173 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 00:30:03.274583  183173 kubeadm.go:322] [mark-control-plane] Marking the node auto-120988 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 00:30:03.789883  183173 kubeadm.go:322] [bootstrap-token] Using token: rtvudu.9m4mg92p335l8r6w
	I1213 00:30:03.792571  183173 out.go:204]   - Configuring RBAC rules ...
	I1213 00:30:03.792673  183173 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 00:30:03.802810  183173 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 00:30:03.817209  183173 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 00:30:03.821193  183173 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 00:30:03.825674  183173 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 00:30:03.829724  183173 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 00:30:03.847995  183173 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 00:30:04.080944  183173 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1213 00:30:04.223426  183173 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1213 00:30:04.223460  183173 kubeadm.go:322] 
	I1213 00:30:04.223581  183173 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1213 00:30:04.223607  183173 kubeadm.go:322] 
	I1213 00:30:04.223723  183173 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1213 00:30:04.223733  183173 kubeadm.go:322] 
	I1213 00:30:04.223765  183173 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1213 00:30:04.223826  183173 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 00:30:04.223885  183173 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 00:30:04.223896  183173 kubeadm.go:322] 
	I1213 00:30:04.223967  183173 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1213 00:30:04.223979  183173 kubeadm.go:322] 
	I1213 00:30:04.224062  183173 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 00:30:04.224073  183173 kubeadm.go:322] 
	I1213 00:30:04.224119  183173 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1213 00:30:04.224181  183173 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 00:30:04.224237  183173 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 00:30:04.224261  183173 kubeadm.go:322] 
	I1213 00:30:04.224375  183173 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 00:30:04.224489  183173 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1213 00:30:04.224502  183173 kubeadm.go:322] 
	I1213 00:30:04.224620  183173 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token rtvudu.9m4mg92p335l8r6w \
	I1213 00:30:04.224769  183173 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d \
	I1213 00:30:04.224808  183173 kubeadm.go:322] 	--control-plane 
	I1213 00:30:04.224821  183173 kubeadm.go:322] 
	I1213 00:30:04.224934  183173 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1213 00:30:04.224945  183173 kubeadm.go:322] 
	I1213 00:30:04.225061  183173 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token rtvudu.9m4mg92p335l8r6w \
	I1213 00:30:04.225203  183173 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1213 00:30:04.225498  183173 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 00:30:04.225539  183173 cni.go:84] Creating CNI manager for ""
	I1213 00:30:04.225564  183173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:30:04.227882  183173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:30:04.229462  183173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:30:04.260867  183173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:30:04.294449  183173 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:30:04.294515  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:04.294534  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=auto-120988 minikube.k8s.io/updated_at=2023_12_13T00_30_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:04.559266  183173 ops.go:34] apiserver oom_adj: -16
	I1213 00:30:04.559338  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:04.647484  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:05.246138  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:05.746017  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:06.246601  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:02.617653  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:02.618187  183417 main.go:141] libmachine: (kindnet-120988) DBG | unable to find current IP address of domain kindnet-120988 in network mk-kindnet-120988
	I1213 00:30:02.618221  183417 main.go:141] libmachine: (kindnet-120988) DBG | I1213 00:30:02.618162  183746 retry.go:31] will retry after 4.400561272s: waiting for machine to come up
	I1213 00:30:08.609409  184055 start.go:369] acquired machines lock for "newest-cni-628189" in 11.289118924s
	I1213 00:30:08.609463  184055 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:30:08.609479  184055 fix.go:54] fixHost starting: 
	I1213 00:30:08.609820  184055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:30:08.609856  184055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:30:08.627127  184055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45019
	I1213 00:30:08.627614  184055 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:30:08.628168  184055 main.go:141] libmachine: Using API Version  1
	I1213 00:30:08.628191  184055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:30:08.628579  184055 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:30:08.628818  184055 main.go:141] libmachine: (newest-cni-628189) Calling .DriverName
	I1213 00:30:08.629000  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetState
	I1213 00:30:08.630620  184055 fix.go:102] recreateIfNeeded on newest-cni-628189: state=Stopped err=<nil>
	I1213 00:30:08.630651  184055 main.go:141] libmachine: (newest-cni-628189) Calling .DriverName
	W1213 00:30:08.630795  184055 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:30:08.633213  184055 out.go:177] * Restarting existing kvm2 VM for "newest-cni-628189" ...
	I1213 00:30:07.020303  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:07.020816  183417 main.go:141] libmachine: (kindnet-120988) Found IP for machine: 192.168.61.213
	I1213 00:30:07.020840  183417 main.go:141] libmachine: (kindnet-120988) Reserving static IP address...
	I1213 00:30:07.020850  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has current primary IP address 192.168.61.213 and MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:07.021249  183417 main.go:141] libmachine: (kindnet-120988) DBG | unable to find host DHCP lease matching {name: "kindnet-120988", mac: "52:54:00:61:7a:87", ip: "192.168.61.213"} in network mk-kindnet-120988
	I1213 00:30:07.097051  183417 main.go:141] libmachine: (kindnet-120988) Reserved static IP address: 192.168.61.213
	I1213 00:30:07.097091  183417 main.go:141] libmachine: (kindnet-120988) DBG | Getting to WaitForSSH function...
	I1213 00:30:07.097114  183417 main.go:141] libmachine: (kindnet-120988) Waiting for SSH to be available...
	I1213 00:30:07.099849  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:07.100342  183417 main.go:141] libmachine: (kindnet-120988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7a:87", ip: ""} in network mk-kindnet-120988: {Iface:virbr4 ExpiryTime:2023-12-13 01:29:59 +0000 UTC Type:0 Mac:52:54:00:61:7a:87 Iaid: IPaddr:192.168.61.213 Prefix:24 Hostname:minikube Clientid:01:52:54:00:61:7a:87}
	I1213 00:30:07.100374  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined IP address 192.168.61.213 and MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:07.100512  183417 main.go:141] libmachine: (kindnet-120988) DBG | Using SSH client type: external
	I1213 00:30:07.100550  183417 main.go:141] libmachine: (kindnet-120988) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/kindnet-120988/id_rsa (-rw-------)
	I1213 00:30:07.100600  183417 main.go:141] libmachine: (kindnet-120988) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/kindnet-120988/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:30:07.100619  183417 main.go:141] libmachine: (kindnet-120988) DBG | About to run SSH command:
	I1213 00:30:07.100636  183417 main.go:141] libmachine: (kindnet-120988) DBG | exit 0
	I1213 00:30:07.196122  183417 main.go:141] libmachine: (kindnet-120988) DBG | SSH cmd err, output: <nil>: 
	I1213 00:30:07.196396  183417 main.go:141] libmachine: (kindnet-120988) KVM machine creation complete!
	I1213 00:30:07.196736  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetConfigRaw
	I1213 00:30:07.197377  183417 main.go:141] libmachine: (kindnet-120988) Calling .DriverName
	I1213 00:30:07.197613  183417 main.go:141] libmachine: (kindnet-120988) Calling .DriverName
	I1213 00:30:07.197760  183417 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1213 00:30:07.197771  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetState
	I1213 00:30:07.199076  183417 main.go:141] libmachine: Detecting operating system of created instance...
	I1213 00:30:07.199094  183417 main.go:141] libmachine: Waiting for SSH to be available...
	I1213 00:30:07.199104  183417 main.go:141] libmachine: Getting to WaitForSSH function...
	I1213 00:30:07.199119  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHHostname
	I1213 00:30:07.201520  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:07.201915  183417 main.go:141] libmachine: (kindnet-120988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7a:87", ip: ""} in network mk-kindnet-120988: {Iface:virbr4 ExpiryTime:2023-12-13 01:29:59 +0000 UTC Type:0 Mac:52:54:00:61:7a:87 Iaid: IPaddr:192.168.61.213 Prefix:24 Hostname:kindnet-120988 Clientid:01:52:54:00:61:7a:87}
	I1213 00:30:07.201944  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined IP address 192.168.61.213 and MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:07.202027  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHPort
	I1213 00:30:07.202193  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHKeyPath
	I1213 00:30:07.202334  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHKeyPath
	I1213 00:30:07.202500  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHUsername
	I1213 00:30:07.202672  183417 main.go:141] libmachine: Using SSH client type: native
	I1213 00:30:07.203150  183417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.213 22 <nil> <nil>}
	I1213 00:30:07.203165  183417 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1213 00:30:07.332566  183417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:30:07.332599  183417 main.go:141] libmachine: Detecting the provisioner...
	I1213 00:30:07.332612  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHHostname
	I1213 00:30:07.335798  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:07.336185  183417 main.go:141] libmachine: (kindnet-120988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7a:87", ip: ""} in network mk-kindnet-120988: {Iface:virbr4 ExpiryTime:2023-12-13 01:29:59 +0000 UTC Type:0 Mac:52:54:00:61:7a:87 Iaid: IPaddr:192.168.61.213 Prefix:24 Hostname:kindnet-120988 Clientid:01:52:54:00:61:7a:87}
	I1213 00:30:07.336207  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined IP address 192.168.61.213 and MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:07.336394  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHPort
	I1213 00:30:07.336620  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHKeyPath
	I1213 00:30:07.336806  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHKeyPath
	I1213 00:30:07.336987  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHUsername
	I1213 00:30:07.337185  183417 main.go:141] libmachine: Using SSH client type: native
	I1213 00:30:07.337521  183417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.213 22 <nil> <nil>}
	I1213 00:30:07.337535  183417 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1213 00:30:07.465558  183417 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0ec83c8-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1213 00:30:07.465628  183417 main.go:141] libmachine: found compatible host: buildroot
	I1213 00:30:07.465636  183417 main.go:141] libmachine: Provisioning with buildroot...
	I1213 00:30:07.465645  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetMachineName
	I1213 00:30:07.465894  183417 buildroot.go:166] provisioning hostname "kindnet-120988"
	I1213 00:30:07.465923  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetMachineName
	I1213 00:30:07.466141  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHHostname
	I1213 00:30:07.469090  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:07.469487  183417 main.go:141] libmachine: (kindnet-120988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7a:87", ip: ""} in network mk-kindnet-120988: {Iface:virbr4 ExpiryTime:2023-12-13 01:29:59 +0000 UTC Type:0 Mac:52:54:00:61:7a:87 Iaid: IPaddr:192.168.61.213 Prefix:24 Hostname:kindnet-120988 Clientid:01:52:54:00:61:7a:87}
	I1213 00:30:07.469529  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined IP address 192.168.61.213 and MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:07.469708  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHPort
	I1213 00:30:07.469941  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHKeyPath
	I1213 00:30:07.470106  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHKeyPath
	I1213 00:30:07.470284  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHUsername
	I1213 00:30:07.470432  183417 main.go:141] libmachine: Using SSH client type: native
	I1213 00:30:07.470866  183417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.213 22 <nil> <nil>}
	I1213 00:30:07.470883  183417 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-120988 && echo "kindnet-120988" | sudo tee /etc/hostname
	I1213 00:30:07.610079  183417 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-120988
	
	I1213 00:30:07.610112  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHHostname
	I1213 00:30:07.613081  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:07.613412  183417 main.go:141] libmachine: (kindnet-120988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7a:87", ip: ""} in network mk-kindnet-120988: {Iface:virbr4 ExpiryTime:2023-12-13 01:29:59 +0000 UTC Type:0 Mac:52:54:00:61:7a:87 Iaid: IPaddr:192.168.61.213 Prefix:24 Hostname:kindnet-120988 Clientid:01:52:54:00:61:7a:87}
	I1213 00:30:07.613443  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined IP address 192.168.61.213 and MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:07.613583  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHPort
	I1213 00:30:07.613794  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHKeyPath
	I1213 00:30:07.613950  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHKeyPath
	I1213 00:30:07.614076  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHUsername
	I1213 00:30:07.614213  183417 main.go:141] libmachine: Using SSH client type: native
	I1213 00:30:07.614528  183417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.213 22 <nil> <nil>}
	I1213 00:30:07.614543  183417 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-120988' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-120988/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-120988' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:30:07.750723  183417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:30:07.750753  183417 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:30:07.750769  183417 buildroot.go:174] setting up certificates
	I1213 00:30:07.750778  183417 provision.go:83] configureAuth start
	I1213 00:30:07.750800  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetMachineName
	I1213 00:30:07.751078  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetIP
	I1213 00:30:07.753910  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:07.754302  183417 main.go:141] libmachine: (kindnet-120988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7a:87", ip: ""} in network mk-kindnet-120988: {Iface:virbr4 ExpiryTime:2023-12-13 01:29:59 +0000 UTC Type:0 Mac:52:54:00:61:7a:87 Iaid: IPaddr:192.168.61.213 Prefix:24 Hostname:kindnet-120988 Clientid:01:52:54:00:61:7a:87}
	I1213 00:30:07.754332  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined IP address 192.168.61.213 and MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:07.754461  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHHostname
	I1213 00:30:07.756893  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:07.757226  183417 main.go:141] libmachine: (kindnet-120988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7a:87", ip: ""} in network mk-kindnet-120988: {Iface:virbr4 ExpiryTime:2023-12-13 01:29:59 +0000 UTC Type:0 Mac:52:54:00:61:7a:87 Iaid: IPaddr:192.168.61.213 Prefix:24 Hostname:kindnet-120988 Clientid:01:52:54:00:61:7a:87}
	I1213 00:30:07.757254  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined IP address 192.168.61.213 and MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:07.757390  183417 provision.go:138] copyHostCerts
	I1213 00:30:07.757450  183417 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:30:07.757462  183417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:30:07.757525  183417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:30:07.757675  183417 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:30:07.757690  183417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:30:07.757713  183417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:30:07.757797  183417 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:30:07.757805  183417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:30:07.757823  183417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:30:07.757876  183417 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.kindnet-120988 san=[192.168.61.213 192.168.61.213 localhost 127.0.0.1 minikube kindnet-120988]
	I1213 00:30:07.812995  183417 provision.go:172] copyRemoteCerts
	I1213 00:30:07.813055  183417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:30:07.813078  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHHostname
	I1213 00:30:07.815544  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:07.815859  183417 main.go:141] libmachine: (kindnet-120988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7a:87", ip: ""} in network mk-kindnet-120988: {Iface:virbr4 ExpiryTime:2023-12-13 01:29:59 +0000 UTC Type:0 Mac:52:54:00:61:7a:87 Iaid: IPaddr:192.168.61.213 Prefix:24 Hostname:kindnet-120988 Clientid:01:52:54:00:61:7a:87}
	I1213 00:30:07.815893  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined IP address 192.168.61.213 and MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:07.816014  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHPort
	I1213 00:30:07.816238  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHKeyPath
	I1213 00:30:07.816455  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHUsername
	I1213 00:30:07.816616  183417 sshutil.go:53] new ssh client: &{IP:192.168.61.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/kindnet-120988/id_rsa Username:docker}
	I1213 00:30:07.909561  183417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 00:30:07.935292  183417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 00:30:07.959616  183417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:30:07.985633  183417 provision.go:86] duration metric: configureAuth took 234.834031ms
	I1213 00:30:07.985657  183417 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:30:07.985835  183417 config.go:182] Loaded profile config "kindnet-120988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:30:07.985910  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHHostname
	I1213 00:30:07.988655  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:07.988994  183417 main.go:141] libmachine: (kindnet-120988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7a:87", ip: ""} in network mk-kindnet-120988: {Iface:virbr4 ExpiryTime:2023-12-13 01:29:59 +0000 UTC Type:0 Mac:52:54:00:61:7a:87 Iaid: IPaddr:192.168.61.213 Prefix:24 Hostname:kindnet-120988 Clientid:01:52:54:00:61:7a:87}
	I1213 00:30:07.989032  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined IP address 192.168.61.213 and MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:07.989211  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHPort
	I1213 00:30:07.989435  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHKeyPath
	I1213 00:30:07.989614  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHKeyPath
	I1213 00:30:07.989758  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHUsername
	I1213 00:30:07.989902  183417 main.go:141] libmachine: Using SSH client type: native
	I1213 00:30:07.990273  183417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.213 22 <nil> <nil>}
	I1213 00:30:07.990290  183417 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:30:08.324152  183417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:30:08.324180  183417 main.go:141] libmachine: Checking connection to Docker...
	I1213 00:30:08.324191  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetURL
	I1213 00:30:08.325569  183417 main.go:141] libmachine: (kindnet-120988) DBG | Using libvirt version 6000000
	I1213 00:30:08.328153  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:08.328563  183417 main.go:141] libmachine: (kindnet-120988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7a:87", ip: ""} in network mk-kindnet-120988: {Iface:virbr4 ExpiryTime:2023-12-13 01:29:59 +0000 UTC Type:0 Mac:52:54:00:61:7a:87 Iaid: IPaddr:192.168.61.213 Prefix:24 Hostname:kindnet-120988 Clientid:01:52:54:00:61:7a:87}
	I1213 00:30:08.328593  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined IP address 192.168.61.213 and MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:08.328806  183417 main.go:141] libmachine: Docker is up and running!
	I1213 00:30:08.328825  183417 main.go:141] libmachine: Reticulating splines...
	I1213 00:30:08.328831  183417 client.go:171] LocalClient.Create took 26.537997696s
	I1213 00:30:08.328851  183417 start.go:167] duration metric: libmachine.API.Create for "kindnet-120988" took 26.538050313s
	I1213 00:30:08.328867  183417 start.go:300] post-start starting for "kindnet-120988" (driver="kvm2")
	I1213 00:30:08.328881  183417 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:30:08.328900  183417 main.go:141] libmachine: (kindnet-120988) Calling .DriverName
	I1213 00:30:08.329182  183417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:30:08.329213  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHHostname
	I1213 00:30:08.331630  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:08.331984  183417 main.go:141] libmachine: (kindnet-120988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7a:87", ip: ""} in network mk-kindnet-120988: {Iface:virbr4 ExpiryTime:2023-12-13 01:29:59 +0000 UTC Type:0 Mac:52:54:00:61:7a:87 Iaid: IPaddr:192.168.61.213 Prefix:24 Hostname:kindnet-120988 Clientid:01:52:54:00:61:7a:87}
	I1213 00:30:08.332020  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined IP address 192.168.61.213 and MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:08.332174  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHPort
	I1213 00:30:08.332372  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHKeyPath
	I1213 00:30:08.332576  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHUsername
	I1213 00:30:08.332728  183417 sshutil.go:53] new ssh client: &{IP:192.168.61.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/kindnet-120988/id_rsa Username:docker}
	I1213 00:30:08.429826  183417 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:30:08.434240  183417 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:30:08.434265  183417 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:30:08.434328  183417 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:30:08.434402  183417 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:30:08.434487  183417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:30:08.443760  183417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:30:08.471481  183417 start.go:303] post-start completed in 142.598483ms
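The post-start phase scans .minikube/addons and .minikube/files and mirrors anything it finds onto the guest (here a single cert that ends up in /etc/ssl/certs). A rough Go sketch of that scan, assuming only that assets live under a files root whose relative path becomes the guest path; listLocalAssets is a made-up helper name.

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// listLocalAssets walks a files root and prints the guest path each asset would
// be copied to, echoing the filesync scan above.
func listLocalAssets(root string) error {
	return filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, relErr := filepath.Rel(root, path)
		if relErr != nil {
			return relErr
		}
		fmt.Printf("%s -> /%s\n", path, filepath.ToSlash(rel))
		return nil
	})
}

func main() {
	// Root taken from the log; point it anywhere locally to try the walk.
	if err := listLocalAssets("/home/jenkins/minikube-integration/17777-136241/.minikube/files"); err != nil {
		fmt.Println(err)
	}
}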
	I1213 00:30:08.471525  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetConfigRaw
	I1213 00:30:08.472177  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetIP
	I1213 00:30:08.475093  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:08.475467  183417 main.go:141] libmachine: (kindnet-120988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7a:87", ip: ""} in network mk-kindnet-120988: {Iface:virbr4 ExpiryTime:2023-12-13 01:29:59 +0000 UTC Type:0 Mac:52:54:00:61:7a:87 Iaid: IPaddr:192.168.61.213 Prefix:24 Hostname:kindnet-120988 Clientid:01:52:54:00:61:7a:87}
	I1213 00:30:08.475487  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined IP address 192.168.61.213 and MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:08.475747  183417 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/config.json ...
	I1213 00:30:08.475993  183417 start.go:128] duration metric: createHost completed in 26.706136072s
	I1213 00:30:08.476021  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHHostname
	I1213 00:30:08.478018  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:08.478298  183417 main.go:141] libmachine: (kindnet-120988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7a:87", ip: ""} in network mk-kindnet-120988: {Iface:virbr4 ExpiryTime:2023-12-13 01:29:59 +0000 UTC Type:0 Mac:52:54:00:61:7a:87 Iaid: IPaddr:192.168.61.213 Prefix:24 Hostname:kindnet-120988 Clientid:01:52:54:00:61:7a:87}
	I1213 00:30:08.478324  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined IP address 192.168.61.213 and MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:08.478473  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHPort
	I1213 00:30:08.478635  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHKeyPath
	I1213 00:30:08.478806  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHKeyPath
	I1213 00:30:08.478936  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHUsername
	I1213 00:30:08.479139  183417 main.go:141] libmachine: Using SSH client type: native
	I1213 00:30:08.479487  183417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.213 22 <nil> <nil>}
	I1213 00:30:08.479498  183417 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1213 00:30:08.609220  183417 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702427408.590240021
	
	I1213 00:30:08.609246  183417 fix.go:206] guest clock: 1702427408.590240021
	I1213 00:30:08.609272  183417 fix.go:219] Guest: 2023-12-13 00:30:08.590240021 +0000 UTC Remote: 2023-12-13 00:30:08.476007201 +0000 UTC m=+66.767985209 (delta=114.23282ms)
	I1213 00:30:08.609297  183417 fix.go:190] guest clock delta is within tolerance: 114.23282ms
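The fix.go lines above compare the guest clock against the host clock and accept the ~114ms drift. A tiny Go illustration using the two timestamps from the log; the 1s tolerance constant is an assumption for the example, not necessarily minikube's actual threshold.

package main

import (
	"fmt"
	"time"
)

// Assumed tolerance for illustration only.
const clockDriftTolerance = 1 * time.Second

func withinTolerance(guest, host time.Time) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= clockDriftTolerance
}

func main() {
	// The two timestamps reported by fix.go above.
	guest := time.Date(2023, 12, 13, 0, 30, 8, 590240021, time.UTC)
	host := time.Date(2023, 12, 13, 0, 30, 8, 476007201, time.UTC)
	delta, ok := withinTolerance(guest, host)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // ~114.23ms, true
}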
	I1213 00:30:08.609308  183417 start.go:83] releasing machines lock for "kindnet-120988", held for 26.839597105s
	I1213 00:30:08.609343  183417 main.go:141] libmachine: (kindnet-120988) Calling .DriverName
	I1213 00:30:08.609677  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetIP
	I1213 00:30:08.612552  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:08.612912  183417 main.go:141] libmachine: (kindnet-120988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7a:87", ip: ""} in network mk-kindnet-120988: {Iface:virbr4 ExpiryTime:2023-12-13 01:29:59 +0000 UTC Type:0 Mac:52:54:00:61:7a:87 Iaid: IPaddr:192.168.61.213 Prefix:24 Hostname:kindnet-120988 Clientid:01:52:54:00:61:7a:87}
	I1213 00:30:08.612944  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined IP address 192.168.61.213 and MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:08.613086  183417 main.go:141] libmachine: (kindnet-120988) Calling .DriverName
	I1213 00:30:08.613580  183417 main.go:141] libmachine: (kindnet-120988) Calling .DriverName
	I1213 00:30:08.613765  183417 main.go:141] libmachine: (kindnet-120988) Calling .DriverName
	I1213 00:30:08.613866  183417 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:30:08.613907  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHHostname
	I1213 00:30:08.613983  183417 ssh_runner.go:195] Run: cat /version.json
	I1213 00:30:08.614000  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHHostname
	I1213 00:30:08.616599  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:08.616819  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:08.616945  183417 main.go:141] libmachine: (kindnet-120988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7a:87", ip: ""} in network mk-kindnet-120988: {Iface:virbr4 ExpiryTime:2023-12-13 01:29:59 +0000 UTC Type:0 Mac:52:54:00:61:7a:87 Iaid: IPaddr:192.168.61.213 Prefix:24 Hostname:kindnet-120988 Clientid:01:52:54:00:61:7a:87}
	I1213 00:30:08.616968  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined IP address 192.168.61.213 and MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:08.617143  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHPort
	I1213 00:30:08.617242  183417 main.go:141] libmachine: (kindnet-120988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7a:87", ip: ""} in network mk-kindnet-120988: {Iface:virbr4 ExpiryTime:2023-12-13 01:29:59 +0000 UTC Type:0 Mac:52:54:00:61:7a:87 Iaid: IPaddr:192.168.61.213 Prefix:24 Hostname:kindnet-120988 Clientid:01:52:54:00:61:7a:87}
	I1213 00:30:08.617273  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined IP address 192.168.61.213 and MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:08.617326  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHKeyPath
	I1213 00:30:08.617478  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHUsername
	I1213 00:30:08.617604  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHPort
	I1213 00:30:08.617619  183417 sshutil.go:53] new ssh client: &{IP:192.168.61.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/kindnet-120988/id_rsa Username:docker}
	I1213 00:30:08.617748  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHKeyPath
	I1213 00:30:08.617875  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHUsername
	I1213 00:30:08.618009  183417 sshutil.go:53] new ssh client: &{IP:192.168.61.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/kindnet-120988/id_rsa Username:docker}
	I1213 00:30:08.710224  183417 ssh_runner.go:195] Run: systemctl --version
	I1213 00:30:08.733014  183417 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:30:08.899249  183417 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:30:08.905453  183417 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:30:08.905525  183417 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:30:08.919506  183417 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:30:08.919533  183417 start.go:475] detecting cgroup driver to use...
	I1213 00:30:08.919607  183417 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:30:08.934363  183417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:30:08.947283  183417 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:30:08.947355  183417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:30:08.959857  183417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:30:08.973900  183417 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:30:09.085440  183417 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:30:09.213336  183417 docker.go:219] disabling docker service ...
	I1213 00:30:09.213421  183417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:30:09.227779  183417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:30:09.240635  183417 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:30:09.367559  183417 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:30:09.478678  183417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:30:09.496007  183417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:30:09.517809  183417 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1213 00:30:09.517891  183417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:30:09.528084  183417 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:30:09.528169  183417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:30:09.537918  183417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:30:09.547610  183417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:30:09.556521  183417 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:30:09.565843  183417 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:30:09.573634  183417 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:30:09.573714  183417 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:30:09.587566  183417 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:30:09.596626  183417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:30:09.726123  183417 ssh_runner.go:195] Run: sudo systemctl restart crio
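The sed invocations above flip pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A small Go sketch of the same line-oriented rewrite, operating on an in-memory string rather than the real file; setCrioOption is a made-up helper name.

package main

import (
	"fmt"
	"regexp"
)

// setCrioOption replaces a `key = value` line in a crio.conf-style snippet,
// the same effect as the sed commands above.
func setCrioOption(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, key+` = "`+value+`"`)
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
	conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setCrioOption(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}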
	I1213 00:30:09.945650  183417 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:30:09.945743  183417 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:30:09.952207  183417 start.go:543] Will wait 60s for crictl version
	I1213 00:30:09.952275  183417 ssh_runner.go:195] Run: which crictl
	I1213 00:30:09.956300  183417 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:30:10.003065  183417 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1213 00:30:10.003167  183417 ssh_runner.go:195] Run: crio --version
	I1213 00:30:10.059753  183417 ssh_runner.go:195] Run: crio --version
	I1213 00:30:10.111483  183417 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1213 00:30:06.746363  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:07.246034  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:07.746049  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:08.246933  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:08.746784  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:09.246255  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:09.746678  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:10.245966  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:10.746060  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:11.246702  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:10.113150  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetIP
	I1213 00:30:10.116227  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:10.116591  183417 main.go:141] libmachine: (kindnet-120988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7a:87", ip: ""} in network mk-kindnet-120988: {Iface:virbr4 ExpiryTime:2023-12-13 01:29:59 +0000 UTC Type:0 Mac:52:54:00:61:7a:87 Iaid: IPaddr:192.168.61.213 Prefix:24 Hostname:kindnet-120988 Clientid:01:52:54:00:61:7a:87}
	I1213 00:30:10.116623  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined IP address 192.168.61.213 and MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:10.116848  183417 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1213 00:30:10.121553  183417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
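The bash one-liner above strips any existing host.minikube.internal entry from /etc/hosts and appends a fresh one for the gateway IP. A Go sketch of the same rewrite applied to an in-memory hosts string; ensureHostRecord is an illustrative helper, not a minikube function.

package main

import (
	"fmt"
	"strings"
)

// ensureHostRecord drops any line ending in "\t<name>" and appends "<ip>\t<name>",
// which is what the grep/echo/cp pipeline above does to /etc/hosts.
func ensureHostRecord(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.61.1\thost.minikube.internal\n"
	fmt.Print(ensureHostRecord(hosts, "192.168.61.1", "host.minikube.internal"))
}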
	I1213 00:30:10.135159  183417 localpath.go:92] copying /home/jenkins/minikube-integration/17777-136241/.minikube/client.crt -> /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/client.crt
	I1213 00:30:10.135299  183417 localpath.go:117] copying /home/jenkins/minikube-integration/17777-136241/.minikube/client.key -> /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/client.key
	I1213 00:30:10.135407  183417 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1213 00:30:10.135587  183417 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:30:10.172568  183417 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1213 00:30:10.172653  183417 ssh_runner.go:195] Run: which lz4
	I1213 00:30:10.176920  183417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1213 00:30:10.181305  183417 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 00:30:10.181350  183417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1213 00:30:08.634785  184055 main.go:141] libmachine: (newest-cni-628189) Calling .Start
	I1213 00:30:08.634955  184055 main.go:141] libmachine: (newest-cni-628189) Ensuring networks are active...
	I1213 00:30:08.635823  184055 main.go:141] libmachine: (newest-cni-628189) Ensuring network default is active
	I1213 00:30:08.636142  184055 main.go:141] libmachine: (newest-cni-628189) Ensuring network mk-newest-cni-628189 is active
	I1213 00:30:08.636531  184055 main.go:141] libmachine: (newest-cni-628189) Getting domain xml...
	I1213 00:30:08.637275  184055 main.go:141] libmachine: (newest-cni-628189) Creating domain...
	I1213 00:30:10.007103  184055 main.go:141] libmachine: (newest-cni-628189) Waiting to get IP...
	I1213 00:30:10.008179  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:10.008678  184055 main.go:141] libmachine: (newest-cni-628189) DBG | unable to find current IP address of domain newest-cni-628189 in network mk-newest-cni-628189
	I1213 00:30:10.008797  184055 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:30:10.008677  184157 retry.go:31] will retry after 286.524752ms: waiting for machine to come up
	I1213 00:30:10.297348  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:10.297999  184055 main.go:141] libmachine: (newest-cni-628189) DBG | unable to find current IP address of domain newest-cni-628189 in network mk-newest-cni-628189
	I1213 00:30:10.298142  184055 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:30:10.298087  184157 retry.go:31] will retry after 304.032557ms: waiting for machine to come up
	I1213 00:30:10.603775  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:10.604494  184055 main.go:141] libmachine: (newest-cni-628189) DBG | unable to find current IP address of domain newest-cni-628189 in network mk-newest-cni-628189
	I1213 00:30:10.604554  184055 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:30:10.604457  184157 retry.go:31] will retry after 430.274433ms: waiting for machine to come up
	I1213 00:30:11.036195  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:11.036830  184055 main.go:141] libmachine: (newest-cni-628189) DBG | unable to find current IP address of domain newest-cni-628189 in network mk-newest-cni-628189
	I1213 00:30:11.036858  184055 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:30:11.036787  184157 retry.go:31] will retry after 402.272444ms: waiting for machine to come up
	I1213 00:30:11.440316  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:11.440914  184055 main.go:141] libmachine: (newest-cni-628189) DBG | unable to find current IP address of domain newest-cni-628189 in network mk-newest-cni-628189
	I1213 00:30:11.440945  184055 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:30:11.440880  184157 retry.go:31] will retry after 519.793861ms: waiting for machine to come up
	I1213 00:30:11.962701  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:11.963180  184055 main.go:141] libmachine: (newest-cni-628189) DBG | unable to find current IP address of domain newest-cni-628189 in network mk-newest-cni-628189
	I1213 00:30:11.963241  184055 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:30:11.963128  184157 retry.go:31] will retry after 746.525345ms: waiting for machine to come up
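The retry.go lines above poll libvirt for the new domain's DHCP lease, sleeping a jittered, growing interval between attempts. A generic Go sketch of that polling pattern; waitForIP and the 200-800ms bounds are illustrative, not minikube's actual API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping a jittered
// interval between attempts, in the spirit of the retry.go lines above.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		wait := time.Duration(200+rand.Intn(600)) * time.Millisecond
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 3 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.61.213", nil
	}, 10)
	fmt.Println(ip, err)
}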
	I1213 00:30:11.746435  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:12.246786  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:12.746275  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:13.246152  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:13.746803  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:14.246812  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:14.747029  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:15.246485  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:15.757451  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:16.398793  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:12.104959  183417 crio.go:444] Took 1.928072 seconds to copy over tarball
	I1213 00:30:12.105035  183417 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 00:30:15.732618  183417 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.627543995s)
	I1213 00:30:15.732661  183417 crio.go:451] Took 3.627676 seconds to extract the tarball
	I1213 00:30:15.732672  183417 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 00:30:15.792376  183417 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:30:15.873687  183417 crio.go:496] all images are preloaded for cri-o runtime.
	I1213 00:30:15.873721  183417 cache_images.go:84] Images are preloaded, skipping loading
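Because no preloaded images were found, the preload tarball is copied over and unpacked with tar -I lz4, and the log reports how long each step took. A Go sketch of invoking that extraction and timing it; it simply shells out to the same command shown above, with the paths from the log hard-coded.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload runs the tar invocation logged above and reports its duration.
func extractPreload(tarball string) (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return 0, fmt.Errorf("tar failed: %v: %s", err, out)
	}
	return time.Since(start), nil
}

func main() {
	took, err := extractPreload("/preloaded.tar.lz4")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("extraction took %s\n", took)
}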
	I1213 00:30:15.873850  183417 ssh_runner.go:195] Run: crio config
	I1213 00:30:15.945232  183417 cni.go:84] Creating CNI manager for "kindnet"
	I1213 00:30:15.945263  183417 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1213 00:30:15.945284  183417 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.213 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-120988 NodeName:kindnet-120988 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 00:30:15.945467  183417 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-120988"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 00:30:15.945546  183417 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kindnet-120988 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:kindnet-120988 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
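The generated kubeadm config above pins the pod network to 10.244.0.0/16 and services to 10.96.0.0/12. A quick Go check that those CIDRs parse and how much address space each leaves, just to make the numbers concrete.

package main

import (
	"fmt"
	"net"
)

func main() {
	// Pod and service subnets from the kubeadm config generated above.
	for _, cidr := range []string{"10.244.0.0/16", "10.96.0.0/12"} {
		_, n, err := net.ParseCIDR(cidr)
		if err != nil {
			panic(err)
		}
		ones, bits := n.Mask.Size()
		fmt.Printf("%s: prefix /%d, %d host bits\n", n, ones, bits-ones)
	}
}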
	I1213 00:30:15.945598  183417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1213 00:30:15.956809  183417 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:30:15.956905  183417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:30:15.968628  183417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 00:30:15.987685  183417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 00:30:16.006158  183417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I1213 00:30:16.025902  183417 ssh_runner.go:195] Run: grep 192.168.61.213	control-plane.minikube.internal$ /etc/hosts
	I1213 00:30:16.030307  183417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:30:16.044330  183417 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988 for IP: 192.168.61.213
	I1213 00:30:16.044370  183417 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:30:16.044605  183417 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:30:16.044673  183417 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:30:16.044791  183417 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/client.key
	I1213 00:30:16.044823  183417 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/apiserver.key.30e54a16
	I1213 00:30:16.044842  183417 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/apiserver.crt.30e54a16 with IP's: [192.168.61.213 10.96.0.1 127.0.0.1 10.0.0.1]
	I1213 00:30:16.303127  183417 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/apiserver.crt.30e54a16 ...
	I1213 00:30:16.303156  183417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/apiserver.crt.30e54a16: {Name:mk6aed8f0768c4b8f98fa0aa8e78fb097fb76f2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:30:16.303346  183417 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/apiserver.key.30e54a16 ...
	I1213 00:30:16.303368  183417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/apiserver.key.30e54a16: {Name:mk5984dd929ca064ab75c470b34f2df689bfbbaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:30:16.303465  183417 certs.go:337] copying /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/apiserver.crt.30e54a16 -> /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/apiserver.crt
	I1213 00:30:16.303537  183417 certs.go:341] copying /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/apiserver.key.30e54a16 -> /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/apiserver.key
	I1213 00:30:16.303586  183417 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/proxy-client.key
	I1213 00:30:16.303601  183417 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/proxy-client.crt with IP's: []
	I1213 00:30:16.440307  183417 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/proxy-client.crt ...
	I1213 00:30:16.440336  183417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/proxy-client.crt: {Name:mk35a46d748e72978e4060ec42ff38ed4798a8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:30:16.440542  183417 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/proxy-client.key ...
	I1213 00:30:16.440567  183417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/proxy-client.key: {Name:mk468e55bda6e9fcb9f8bc06bcdd6b717e6283ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:30:16.440815  183417 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:30:16.440855  183417 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:30:16.440866  183417 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:30:16.440896  183417 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:30:16.440922  183417 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:30:16.440951  183417 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:30:16.440988  183417 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:30:16.441651  183417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:30:16.467412  183417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 00:30:16.496097  183417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:30:16.524036  183417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/kindnet-120988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 00:30:16.551365  183417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:30:16.577360  183417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:30:16.603196  183417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:30:16.627541  183417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:30:16.652258  183417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:30:16.680181  183417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:30:16.705744  183417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:30:16.731989  183417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:30:16.752887  183417 ssh_runner.go:195] Run: openssl version
	I1213 00:30:16.761031  183417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:30:12.711749  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:12.712323  184055 main.go:141] libmachine: (newest-cni-628189) DBG | unable to find current IP address of domain newest-cni-628189 in network mk-newest-cni-628189
	I1213 00:30:12.712369  184055 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:30:12.712256  184157 retry.go:31] will retry after 889.373292ms: waiting for machine to come up
	I1213 00:30:13.603141  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:13.603637  184055 main.go:141] libmachine: (newest-cni-628189) DBG | unable to find current IP address of domain newest-cni-628189 in network mk-newest-cni-628189
	I1213 00:30:13.603661  184055 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:30:13.603597  184157 retry.go:31] will retry after 1.351593437s: waiting for machine to come up
	I1213 00:30:14.956807  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:14.957395  184055 main.go:141] libmachine: (newest-cni-628189) DBG | unable to find current IP address of domain newest-cni-628189 in network mk-newest-cni-628189
	I1213 00:30:14.957435  184055 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:30:14.957320  184157 retry.go:31] will retry after 1.122580036s: waiting for machine to come up
	I1213 00:30:16.081490  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:16.081895  184055 main.go:141] libmachine: (newest-cni-628189) DBG | unable to find current IP address of domain newest-cni-628189 in network mk-newest-cni-628189
	I1213 00:30:16.081928  184055 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:30:16.081852  184157 retry.go:31] will retry after 2.033764253s: waiting for machine to come up
	I1213 00:30:16.746757  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:17.255035  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:17.746559  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:18.246458  183173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:18.487003  183173 kubeadm.go:1088] duration metric: took 14.192558884s to wait for elevateKubeSystemPrivileges.
	I1213 00:30:18.487042  183173 kubeadm.go:406] StartCluster complete in 28.655881885s
	I1213 00:30:18.487068  183173 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:30:18.487168  183173 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:30:18.488474  183173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:30:18.488749  183173 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:30:18.488757  183173 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:30:18.488836  183173 addons.go:69] Setting storage-provisioner=true in profile "auto-120988"
	I1213 00:30:18.488845  183173 addons.go:69] Setting default-storageclass=true in profile "auto-120988"
	I1213 00:30:18.488857  183173 addons.go:231] Setting addon storage-provisioner=true in "auto-120988"
	I1213 00:30:18.488864  183173 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-120988"
	I1213 00:30:18.488926  183173 host.go:66] Checking if "auto-120988" exists ...
	I1213 00:30:18.488967  183173 config.go:182] Loaded profile config "auto-120988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:30:18.489352  183173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:30:18.489375  183173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:30:18.489414  183173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:30:18.489439  183173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:30:18.509833  183173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41823
	I1213 00:30:18.510044  183173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I1213 00:30:18.510514  183173 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:30:18.510614  183173 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:30:18.511159  183173 main.go:141] libmachine: Using API Version  1
	I1213 00:30:18.511178  183173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:30:18.511635  183173 main.go:141] libmachine: Using API Version  1
	I1213 00:30:18.511653  183173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:30:18.511710  183173 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:30:18.512042  183173 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:30:18.512094  183173 main.go:141] libmachine: (auto-120988) Calling .GetState
	I1213 00:30:18.512681  183173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:30:18.512711  183173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:30:18.515410  183173 addons.go:231] Setting addon default-storageclass=true in "auto-120988"
	I1213 00:30:18.515453  183173 host.go:66] Checking if "auto-120988" exists ...
	I1213 00:30:18.515855  183173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:30:18.515880  183173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:30:18.535172  183173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37201
	I1213 00:30:18.535906  183173 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:30:18.536424  183173 main.go:141] libmachine: Using API Version  1
	I1213 00:30:18.536466  183173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:30:18.536917  183173 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:30:18.537172  183173 main.go:141] libmachine: (auto-120988) Calling .GetState
	I1213 00:30:18.538863  183173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I1213 00:30:18.539245  183173 main.go:141] libmachine: (auto-120988) Calling .DriverName
	I1213 00:30:18.539315  183173 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:30:18.541442  183173 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:30:18.539898  183173 main.go:141] libmachine: Using API Version  1
	I1213 00:30:18.543003  183173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:30:18.543108  183173 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:30:18.543130  183173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:30:18.543151  183173 main.go:141] libmachine: (auto-120988) Calling .GetSSHHostname
	I1213 00:30:18.543821  183173 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:30:18.544991  183173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:30:18.545023  183173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:30:18.546927  183173 main.go:141] libmachine: (auto-120988) DBG | domain auto-120988 has defined MAC address 52:54:00:ad:5b:4f in network mk-auto-120988
	I1213 00:30:18.547419  183173 main.go:141] libmachine: (auto-120988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:5b:4f", ip: ""} in network mk-auto-120988: {Iface:virbr2 ExpiryTime:2023-12-13 01:29:33 +0000 UTC Type:0 Mac:52:54:00:ad:5b:4f Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:auto-120988 Clientid:01:52:54:00:ad:5b:4f}
	I1213 00:30:18.547444  183173 main.go:141] libmachine: (auto-120988) DBG | domain auto-120988 has defined IP address 192.168.50.181 and MAC address 52:54:00:ad:5b:4f in network mk-auto-120988
	I1213 00:30:18.547634  183173 main.go:141] libmachine: (auto-120988) Calling .GetSSHPort
	I1213 00:30:18.547832  183173 main.go:141] libmachine: (auto-120988) Calling .GetSSHKeyPath
	I1213 00:30:18.547960  183173 main.go:141] libmachine: (auto-120988) Calling .GetSSHUsername
	I1213 00:30:18.548047  183173 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/auto-120988/id_rsa Username:docker}
	I1213 00:30:18.564813  183173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39245
	I1213 00:30:18.565277  183173 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:30:18.565915  183173 main.go:141] libmachine: Using API Version  1
	I1213 00:30:18.565938  183173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:30:18.566462  183173 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:30:18.566667  183173 main.go:141] libmachine: (auto-120988) Calling .GetState
	I1213 00:30:18.568789  183173 main.go:141] libmachine: (auto-120988) Calling .DriverName
	I1213 00:30:18.569066  183173 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:30:18.569083  183173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:30:18.569102  183173 main.go:141] libmachine: (auto-120988) Calling .GetSSHHostname
	I1213 00:30:18.572280  183173 main.go:141] libmachine: (auto-120988) DBG | domain auto-120988 has defined MAC address 52:54:00:ad:5b:4f in network mk-auto-120988
	I1213 00:30:18.572661  183173 main.go:141] libmachine: (auto-120988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:5b:4f", ip: ""} in network mk-auto-120988: {Iface:virbr2 ExpiryTime:2023-12-13 01:29:33 +0000 UTC Type:0 Mac:52:54:00:ad:5b:4f Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:auto-120988 Clientid:01:52:54:00:ad:5b:4f}
	I1213 00:30:18.572681  183173 main.go:141] libmachine: (auto-120988) DBG | domain auto-120988 has defined IP address 192.168.50.181 and MAC address 52:54:00:ad:5b:4f in network mk-auto-120988
	I1213 00:30:18.572938  183173 main.go:141] libmachine: (auto-120988) Calling .GetSSHPort
	I1213 00:30:18.573127  183173 main.go:141] libmachine: (auto-120988) Calling .GetSSHKeyPath
	I1213 00:30:18.573275  183173 main.go:141] libmachine: (auto-120988) Calling .GetSSHUsername
	I1213 00:30:18.573405  183173 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/auto-120988/id_rsa Username:docker}
	I1213 00:30:18.589322  183173 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-120988" context rescaled to 1 replicas
	I1213 00:30:18.589360  183173 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.50.181 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:30:18.591099  183173 out.go:177] * Verifying Kubernetes components...
	I1213 00:30:18.592558  183173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:30:18.730227  183173 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 00:30:18.731503  183173 node_ready.go:35] waiting up to 15m0s for node "auto-120988" to be "Ready" ...
	I1213 00:30:18.739607  183173 node_ready.go:49] node "auto-120988" has status "Ready":"True"
	I1213 00:30:18.739634  183173 node_ready.go:38] duration metric: took 8.10326ms waiting for node "auto-120988" to be "Ready" ...
	I1213 00:30:18.739646  183173 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:30:18.752105  183173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:30:18.762673  183173 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-bp5rz" in "kube-system" namespace to be "Ready" ...
	I1213 00:30:18.783870  183173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:30:20.256336  183173 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.526062633s)
	I1213 00:30:20.256397  183173 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
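	The CoreDNS rewrite above is only visible as a sed one-liner. Reading just the expressions in that command, the replaced Corefile should gain a "log" directive ahead of "errors" and a "hosts" block ahead of the "forward . /etc/resolv.conf" line, roughly as follows (reconstructed from the command, not captured output):

	        log
	        errors
	        ...
	        hosts {
	           192.168.50.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
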
	I1213 00:30:20.693402  183173 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.941247538s)
	I1213 00:30:20.693471  183173 main.go:141] libmachine: Making call to close driver server
	I1213 00:30:20.693488  183173 main.go:141] libmachine: (auto-120988) Calling .Close
	I1213 00:30:20.693405  183173 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.909497209s)
	I1213 00:30:20.693568  183173 main.go:141] libmachine: Making call to close driver server
	I1213 00:30:20.693595  183173 main.go:141] libmachine: (auto-120988) Calling .Close
	I1213 00:30:20.693825  183173 main.go:141] libmachine: (auto-120988) DBG | Closing plugin on server side
	I1213 00:30:20.694090  183173 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:30:20.694105  183173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:30:20.694123  183173 main.go:141] libmachine: Making call to close driver server
	I1213 00:30:20.694133  183173 main.go:141] libmachine: (auto-120988) Calling .Close
	I1213 00:30:20.694213  183173 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:30:20.694224  183173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:30:20.694259  183173 main.go:141] libmachine: Making call to close driver server
	I1213 00:30:20.694269  183173 main.go:141] libmachine: (auto-120988) Calling .Close
	I1213 00:30:20.694448  183173 main.go:141] libmachine: (auto-120988) DBG | Closing plugin on server side
	I1213 00:30:20.694485  183173 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:30:20.694495  183173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:30:20.694570  183173 main.go:141] libmachine: (auto-120988) DBG | Closing plugin on server side
	I1213 00:30:20.694661  183173 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:30:20.694678  183173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:30:20.723208  183173 main.go:141] libmachine: Making call to close driver server
	I1213 00:30:20.723241  183173 main.go:141] libmachine: (auto-120988) Calling .Close
	I1213 00:30:20.723557  183173 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:30:20.723578  183173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:30:20.725413  183173 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1213 00:30:20.726843  183173 addons.go:502] enable addons completed in 2.238086382s: enabled=[storage-provisioner default-storageclass]
	I1213 00:30:20.801691  183173 pod_ready.go:102] pod "coredns-5dd5756b68-bp5rz" in "kube-system" namespace has status "Ready":"False"
	I1213 00:30:16.772921  183417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:30:16.891078  183417 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:30:16.891154  183417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:30:16.898457  183417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 00:30:16.913823  183417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:30:16.929307  183417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:30:16.936375  183417 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:30:16.936481  183417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:30:16.943537  183417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 00:30:16.955478  183417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:30:16.971735  183417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:30:16.978695  183417 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:30:16.978784  183417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:30:16.986979  183417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
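	The certificate linking steps above follow the OpenSSL hashed-symlink convention: each link under /etc/ssl/certs is named after the subject hash that "openssl x509 -hash -noout" prints for the certificate, plus a ".0" suffix. A minimal manual equivalent of the last link, assuming the hash printed for 143541.pem is the 51391683 value used in the link name:

	# sketch only: compute the subject hash, then create the hash-named symlink
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem)
	sudo ln -fs /etc/ssl/certs/143541.pem "/etc/ssl/certs/${HASH}.0"
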
	I1213 00:30:17.002881  183417 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:30:17.009287  183417 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1213 00:30:17.009352  183417 kubeadm.go:404] StartCluster: {Name:kindnet-120988 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-120988 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:30:17.009488  183417 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:30:17.009555  183417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:30:17.052439  183417 cri.go:89] found id: ""
	I1213 00:30:17.052532  183417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:30:17.065186  183417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:30:17.075892  183417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:30:17.086664  183417 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:30:17.086714  183417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 00:30:17.303371  183417 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 00:30:18.117855  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:18.118511  184055 main.go:141] libmachine: (newest-cni-628189) DBG | unable to find current IP address of domain newest-cni-628189 in network mk-newest-cni-628189
	I1213 00:30:18.118543  184055 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:30:18.118424  184157 retry.go:31] will retry after 1.826105024s: waiting for machine to come up
	I1213 00:30:19.946706  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:19.947176  184055 main.go:141] libmachine: (newest-cni-628189) DBG | unable to find current IP address of domain newest-cni-628189 in network mk-newest-cni-628189
	I1213 00:30:19.947208  184055 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:30:19.947144  184157 retry.go:31] will retry after 2.589679638s: waiting for machine to come up
	I1213 00:30:22.802719  183173 pod_ready.go:102] pod "coredns-5dd5756b68-bp5rz" in "kube-system" namespace has status "Ready":"False"
	I1213 00:30:24.802839  183173 pod_ready.go:102] pod "coredns-5dd5756b68-bp5rz" in "kube-system" namespace has status "Ready":"False"
	I1213 00:30:22.538845  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:22.539341  184055 main.go:141] libmachine: (newest-cni-628189) DBG | unable to find current IP address of domain newest-cni-628189 in network mk-newest-cni-628189
	I1213 00:30:22.539368  184055 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:30:22.539294  184157 retry.go:31] will retry after 2.733218353s: waiting for machine to come up
	I1213 00:30:25.273696  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:25.274131  184055 main.go:141] libmachine: (newest-cni-628189) DBG | unable to find current IP address of domain newest-cni-628189 in network mk-newest-cni-628189
	I1213 00:30:25.274155  184055 main.go:141] libmachine: (newest-cni-628189) DBG | I1213 00:30:25.274102  184157 retry.go:31] will retry after 4.724405723s: waiting for machine to come up
	I1213 00:30:29.895825  183417 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1213 00:30:29.895896  183417 kubeadm.go:322] [preflight] Running pre-flight checks
	I1213 00:30:29.895987  183417 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 00:30:29.896110  183417 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 00:30:29.896251  183417 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 00:30:29.896336  183417 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 00:30:29.897976  183417 out.go:204]   - Generating certificates and keys ...
	I1213 00:30:29.898086  183417 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1213 00:30:29.898177  183417 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1213 00:30:29.898294  183417 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 00:30:29.898397  183417 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1213 00:30:29.898483  183417 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1213 00:30:29.898552  183417 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1213 00:30:29.898629  183417 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1213 00:30:29.898793  183417 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kindnet-120988 localhost] and IPs [192.168.61.213 127.0.0.1 ::1]
	I1213 00:30:29.898858  183417 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1213 00:30:29.899025  183417 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kindnet-120988 localhost] and IPs [192.168.61.213 127.0.0.1 ::1]
	I1213 00:30:29.899114  183417 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 00:30:29.899201  183417 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 00:30:29.899263  183417 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1213 00:30:29.899349  183417 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 00:30:29.899423  183417 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 00:30:29.899486  183417 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 00:30:29.899571  183417 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 00:30:29.899663  183417 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 00:30:29.899763  183417 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 00:30:29.899887  183417 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 00:30:29.901587  183417 out.go:204]   - Booting up control plane ...
	I1213 00:30:29.901703  183417 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 00:30:29.901797  183417 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 00:30:29.901910  183417 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 00:30:29.902051  183417 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 00:30:29.902131  183417 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 00:30:29.902165  183417 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1213 00:30:29.902305  183417 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 00:30:29.902397  183417 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.505418 seconds
	I1213 00:30:29.902559  183417 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 00:30:29.902725  183417 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 00:30:29.902803  183417 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 00:30:29.903057  183417 kubeadm.go:322] [mark-control-plane] Marking the node kindnet-120988 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 00:30:29.903160  183417 kubeadm.go:322] [bootstrap-token] Using token: eu428c.yxadhbc6fk2636gi
	I1213 00:30:29.904515  183417 out.go:204]   - Configuring RBAC rules ...
	I1213 00:30:29.904638  183417 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 00:30:29.904756  183417 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 00:30:29.904918  183417 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 00:30:29.905069  183417 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 00:30:29.905222  183417 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 00:30:29.905338  183417 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 00:30:29.905512  183417 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 00:30:29.905569  183417 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1213 00:30:29.905626  183417 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1213 00:30:29.905636  183417 kubeadm.go:322] 
	I1213 00:30:29.905705  183417 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1213 00:30:29.905714  183417 kubeadm.go:322] 
	I1213 00:30:29.905810  183417 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1213 00:30:29.905821  183417 kubeadm.go:322] 
	I1213 00:30:29.905850  183417 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1213 00:30:29.905924  183417 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 00:30:29.905991  183417 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 00:30:29.906017  183417 kubeadm.go:322] 
	I1213 00:30:29.906083  183417 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1213 00:30:29.906093  183417 kubeadm.go:322] 
	I1213 00:30:29.906177  183417 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 00:30:29.906187  183417 kubeadm.go:322] 
	I1213 00:30:29.906254  183417 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1213 00:30:29.906344  183417 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 00:30:29.906426  183417 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 00:30:29.906443  183417 kubeadm.go:322] 
	I1213 00:30:29.906544  183417 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 00:30:29.906648  183417 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1213 00:30:29.906658  183417 kubeadm.go:322] 
	I1213 00:30:29.906762  183417 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token eu428c.yxadhbc6fk2636gi \
	I1213 00:30:29.906913  183417 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d \
	I1213 00:30:29.906948  183417 kubeadm.go:322] 	--control-plane 
	I1213 00:30:29.906957  183417 kubeadm.go:322] 
	I1213 00:30:29.907060  183417 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1213 00:30:29.907073  183417 kubeadm.go:322] 
	I1213 00:30:29.907149  183417 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token eu428c.yxadhbc6fk2636gi \
	I1213 00:30:29.907245  183417 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1213 00:30:29.907255  183417 cni.go:84] Creating CNI manager for "kindnet"
	I1213 00:30:29.908971  183417 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1213 00:30:27.301421  183173 pod_ready.go:102] pod "coredns-5dd5756b68-bp5rz" in "kube-system" namespace has status "Ready":"False"
	I1213 00:30:29.803790  183173 pod_ready.go:102] pod "coredns-5dd5756b68-bp5rz" in "kube-system" namespace has status "Ready":"False"
	I1213 00:30:29.910381  183417 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 00:30:29.925049  183417 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1213 00:30:29.925087  183417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1213 00:30:29.981130  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 00:30:31.121652  183417 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.140482066s)
	I1213 00:30:31.121710  183417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:30:31.121786  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:31.121858  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=kindnet-120988 minikube.k8s.io/updated_at=2023_12_13T00_30_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:31.369596  183417 ops.go:34] apiserver oom_adj: -16
	I1213 00:30:31.369754  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:31.487499  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:30.000031  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:30.000649  184055 main.go:141] libmachine: (newest-cni-628189) Found IP for machine: 192.168.39.196
	I1213 00:30:30.000677  184055 main.go:141] libmachine: (newest-cni-628189) Reserving static IP address...
	I1213 00:30:30.000695  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has current primary IP address 192.168.39.196 and MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:30.001180  184055 main.go:141] libmachine: (newest-cni-628189) DBG | found host DHCP lease matching {name: "newest-cni-628189", mac: "52:54:00:7f:c4:4e", ip: "192.168.39.196"} in network mk-newest-cni-628189: {Iface:virbr1 ExpiryTime:2023-12-13 01:29:09 +0000 UTC Type:0 Mac:52:54:00:7f:c4:4e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:newest-cni-628189 Clientid:01:52:54:00:7f:c4:4e}
	I1213 00:30:30.001222  184055 main.go:141] libmachine: (newest-cni-628189) DBG | skip adding static IP to network mk-newest-cni-628189 - found existing host DHCP lease matching {name: "newest-cni-628189", mac: "52:54:00:7f:c4:4e", ip: "192.168.39.196"}
	I1213 00:30:30.001254  184055 main.go:141] libmachine: (newest-cni-628189) Reserved static IP address: 192.168.39.196
	I1213 00:30:30.001268  184055 main.go:141] libmachine: (newest-cni-628189) Waiting for SSH to be available...
	I1213 00:30:30.001280  184055 main.go:141] libmachine: (newest-cni-628189) DBG | Getting to WaitForSSH function...
	I1213 00:30:30.003724  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:30.004137  184055 main.go:141] libmachine: (newest-cni-628189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:c4:4e", ip: ""} in network mk-newest-cni-628189: {Iface:virbr1 ExpiryTime:2023-12-13 01:29:09 +0000 UTC Type:0 Mac:52:54:00:7f:c4:4e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:newest-cni-628189 Clientid:01:52:54:00:7f:c4:4e}
	I1213 00:30:30.004169  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined IP address 192.168.39.196 and MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:30.004362  184055 main.go:141] libmachine: (newest-cni-628189) DBG | Using SSH client type: external
	I1213 00:30:30.004404  184055 main.go:141] libmachine: (newest-cni-628189) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/newest-cni-628189/id_rsa (-rw-------)
	I1213 00:30:30.004471  184055 main.go:141] libmachine: (newest-cni-628189) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/newest-cni-628189/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:30:30.004494  184055 main.go:141] libmachine: (newest-cni-628189) DBG | About to run SSH command:
	I1213 00:30:30.004512  184055 main.go:141] libmachine: (newest-cni-628189) DBG | exit 0
	I1213 00:30:30.104702  184055 main.go:141] libmachine: (newest-cni-628189) DBG | SSH cmd err, output: <nil>: 
	I1213 00:30:30.105098  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetConfigRaw
	I1213 00:30:30.105837  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetIP
	I1213 00:30:30.108871  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:30.109245  184055 main.go:141] libmachine: (newest-cni-628189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:c4:4e", ip: ""} in network mk-newest-cni-628189: {Iface:virbr1 ExpiryTime:2023-12-13 01:29:09 +0000 UTC Type:0 Mac:52:54:00:7f:c4:4e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:newest-cni-628189 Clientid:01:52:54:00:7f:c4:4e}
	I1213 00:30:30.109270  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined IP address 192.168.39.196 and MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:30.109570  184055 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/newest-cni-628189/config.json ...
	I1213 00:30:30.109791  184055 machine.go:88] provisioning docker machine ...
	I1213 00:30:30.109809  184055 main.go:141] libmachine: (newest-cni-628189) Calling .DriverName
	I1213 00:30:30.110033  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetMachineName
	I1213 00:30:30.110266  184055 buildroot.go:166] provisioning hostname "newest-cni-628189"
	I1213 00:30:30.110290  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetMachineName
	I1213 00:30:30.110480  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHHostname
	I1213 00:30:30.113519  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:30.113989  184055 main.go:141] libmachine: (newest-cni-628189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:c4:4e", ip: ""} in network mk-newest-cni-628189: {Iface:virbr1 ExpiryTime:2023-12-13 01:29:09 +0000 UTC Type:0 Mac:52:54:00:7f:c4:4e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:newest-cni-628189 Clientid:01:52:54:00:7f:c4:4e}
	I1213 00:30:30.114018  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined IP address 192.168.39.196 and MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:30.114181  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHPort
	I1213 00:30:30.114380  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHKeyPath
	I1213 00:30:30.114573  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHKeyPath
	I1213 00:30:30.114783  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHUsername
	I1213 00:30:30.114979  184055 main.go:141] libmachine: Using SSH client type: native
	I1213 00:30:30.115387  184055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I1213 00:30:30.115403  184055 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-628189 && echo "newest-cni-628189" | sudo tee /etc/hostname
	I1213 00:30:30.269659  184055 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-628189
	
	I1213 00:30:30.269742  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHHostname
	I1213 00:30:30.276185  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:30.276631  184055 main.go:141] libmachine: (newest-cni-628189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:c4:4e", ip: ""} in network mk-newest-cni-628189: {Iface:virbr1 ExpiryTime:2023-12-13 01:29:09 +0000 UTC Type:0 Mac:52:54:00:7f:c4:4e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:newest-cni-628189 Clientid:01:52:54:00:7f:c4:4e}
	I1213 00:30:30.276658  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined IP address 192.168.39.196 and MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:30.276853  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHPort
	I1213 00:30:30.277094  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHKeyPath
	I1213 00:30:30.277275  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHKeyPath
	I1213 00:30:30.277455  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHUsername
	I1213 00:30:30.277622  184055 main.go:141] libmachine: Using SSH client type: native
	I1213 00:30:30.277929  184055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I1213 00:30:30.277953  184055 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-628189' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-628189/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-628189' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:30:30.426065  184055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:30:30.426103  184055 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:30:30.426167  184055 buildroot.go:174] setting up certificates
	I1213 00:30:30.426188  184055 provision.go:83] configureAuth start
	I1213 00:30:30.426211  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetMachineName
	I1213 00:30:30.426525  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetIP
	I1213 00:30:30.429303  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:30.429622  184055 main.go:141] libmachine: (newest-cni-628189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:c4:4e", ip: ""} in network mk-newest-cni-628189: {Iface:virbr1 ExpiryTime:2023-12-13 01:29:09 +0000 UTC Type:0 Mac:52:54:00:7f:c4:4e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:newest-cni-628189 Clientid:01:52:54:00:7f:c4:4e}
	I1213 00:30:30.429652  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined IP address 192.168.39.196 and MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:30.429820  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHHostname
	I1213 00:30:30.432282  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:30.432683  184055 main.go:141] libmachine: (newest-cni-628189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:c4:4e", ip: ""} in network mk-newest-cni-628189: {Iface:virbr1 ExpiryTime:2023-12-13 01:29:09 +0000 UTC Type:0 Mac:52:54:00:7f:c4:4e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:newest-cni-628189 Clientid:01:52:54:00:7f:c4:4e}
	I1213 00:30:30.432716  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined IP address 192.168.39.196 and MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:30.432875  184055 provision.go:138] copyHostCerts
	I1213 00:30:30.432942  184055 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:30:30.432958  184055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:30:30.433012  184055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:30:30.433130  184055 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:30:30.433141  184055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:30:30.433179  184055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:30:30.433255  184055 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:30:30.433266  184055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:30:30.433293  184055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:30:30.433352  184055 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.newest-cni-628189 san=[192.168.39.196 192.168.39.196 localhost 127.0.0.1 minikube newest-cni-628189]
	I1213 00:30:30.520523  184055 provision.go:172] copyRemoteCerts
	I1213 00:30:30.520616  184055 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:30:30.520648  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHHostname
	I1213 00:30:30.523554  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:30.523938  184055 main.go:141] libmachine: (newest-cni-628189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:c4:4e", ip: ""} in network mk-newest-cni-628189: {Iface:virbr1 ExpiryTime:2023-12-13 01:29:09 +0000 UTC Type:0 Mac:52:54:00:7f:c4:4e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:newest-cni-628189 Clientid:01:52:54:00:7f:c4:4e}
	I1213 00:30:30.523981  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined IP address 192.168.39.196 and MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:30.524170  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHPort
	I1213 00:30:30.524376  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHKeyPath
	I1213 00:30:30.524596  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHUsername
	I1213 00:30:30.524757  184055 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/newest-cni-628189/id_rsa Username:docker}
	I1213 00:30:30.622478  184055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 00:30:30.652043  184055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:30:30.676447  184055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1213 00:30:30.700401  184055 provision.go:86] duration metric: configureAuth took 274.191855ms
	I1213 00:30:30.700448  184055 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:30:30.700660  184055 config.go:182] Loaded profile config "newest-cni-628189": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1213 00:30:30.700742  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHHostname
	I1213 00:30:30.703642  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:30.704069  184055 main.go:141] libmachine: (newest-cni-628189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:c4:4e", ip: ""} in network mk-newest-cni-628189: {Iface:virbr1 ExpiryTime:2023-12-13 01:29:09 +0000 UTC Type:0 Mac:52:54:00:7f:c4:4e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:newest-cni-628189 Clientid:01:52:54:00:7f:c4:4e}
	I1213 00:30:30.704111  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined IP address 192.168.39.196 and MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:30.704324  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHPort
	I1213 00:30:30.704575  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHKeyPath
	I1213 00:30:30.704754  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHKeyPath
	I1213 00:30:30.704893  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHUsername
	I1213 00:30:30.705053  184055 main.go:141] libmachine: Using SSH client type: native
	I1213 00:30:30.705530  184055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I1213 00:30:30.705556  184055 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:30:31.046771  184055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:30:31.046800  184055 machine.go:91] provisioned docker machine in 936.994293ms
	I1213 00:30:31.046811  184055 start.go:300] post-start starting for "newest-cni-628189" (driver="kvm2")
	I1213 00:30:31.046826  184055 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:30:31.046856  184055 main.go:141] libmachine: (newest-cni-628189) Calling .DriverName
	I1213 00:30:31.047210  184055 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:30:31.047235  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHHostname
	I1213 00:30:31.050067  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:31.050442  184055 main.go:141] libmachine: (newest-cni-628189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:c4:4e", ip: ""} in network mk-newest-cni-628189: {Iface:virbr1 ExpiryTime:2023-12-13 01:29:09 +0000 UTC Type:0 Mac:52:54:00:7f:c4:4e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:newest-cni-628189 Clientid:01:52:54:00:7f:c4:4e}
	I1213 00:30:31.050467  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined IP address 192.168.39.196 and MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:31.050596  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHPort
	I1213 00:30:31.050828  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHKeyPath
	I1213 00:30:31.051032  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHUsername
	I1213 00:30:31.051187  184055 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/newest-cni-628189/id_rsa Username:docker}
	I1213 00:30:31.151458  184055 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:30:31.157042  184055 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:30:31.157070  184055 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:30:31.157138  184055 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:30:31.157232  184055 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:30:31.157360  184055 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:30:31.169857  184055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:30:31.195964  184055 start.go:303] post-start completed in 149.135002ms
	I1213 00:30:31.195989  184055 fix.go:56] fixHost completed within 22.586516619s
	I1213 00:30:31.196014  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHHostname
	I1213 00:30:31.199291  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:31.199667  184055 main.go:141] libmachine: (newest-cni-628189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:c4:4e", ip: ""} in network mk-newest-cni-628189: {Iface:virbr1 ExpiryTime:2023-12-13 01:29:09 +0000 UTC Type:0 Mac:52:54:00:7f:c4:4e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:newest-cni-628189 Clientid:01:52:54:00:7f:c4:4e}
	I1213 00:30:31.199701  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined IP address 192.168.39.196 and MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:31.199849  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHPort
	I1213 00:30:31.200042  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHKeyPath
	I1213 00:30:31.200243  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHKeyPath
	I1213 00:30:31.200497  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHUsername
	I1213 00:30:31.200726  184055 main.go:141] libmachine: Using SSH client type: native
	I1213 00:30:31.201043  184055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I1213 00:30:31.201054  184055 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1213 00:30:31.341456  184055 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702427431.316805100
	
	I1213 00:30:31.341484  184055 fix.go:206] guest clock: 1702427431.316805100
	I1213 00:30:31.341495  184055 fix.go:219] Guest: 2023-12-13 00:30:31.3168051 +0000 UTC Remote: 2023-12-13 00:30:31.195993912 +0000 UTC m=+34.089742275 (delta=120.811188ms)
	I1213 00:30:31.341516  184055 fix.go:190] guest clock delta is within tolerance: 120.811188ms
	I1213 00:30:31.341520  184055 start.go:83] releasing machines lock for "newest-cni-628189", held for 22.732082476s
	I1213 00:30:31.341538  184055 main.go:141] libmachine: (newest-cni-628189) Calling .DriverName
	I1213 00:30:31.341824  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetIP
	I1213 00:30:31.344399  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:31.344878  184055 main.go:141] libmachine: (newest-cni-628189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:c4:4e", ip: ""} in network mk-newest-cni-628189: {Iface:virbr1 ExpiryTime:2023-12-13 01:29:09 +0000 UTC Type:0 Mac:52:54:00:7f:c4:4e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:newest-cni-628189 Clientid:01:52:54:00:7f:c4:4e}
	I1213 00:30:31.344909  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined IP address 192.168.39.196 and MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:31.345080  184055 main.go:141] libmachine: (newest-cni-628189) Calling .DriverName
	I1213 00:30:31.345625  184055 main.go:141] libmachine: (newest-cni-628189) Calling .DriverName
	I1213 00:30:31.345834  184055 main.go:141] libmachine: (newest-cni-628189) Calling .DriverName
	I1213 00:30:31.345927  184055 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:30:31.345989  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHHostname
	I1213 00:30:31.346061  184055 ssh_runner.go:195] Run: cat /version.json
	I1213 00:30:31.346092  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHHostname
	I1213 00:30:31.348804  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:31.349092  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:31.349126  184055 main.go:141] libmachine: (newest-cni-628189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:c4:4e", ip: ""} in network mk-newest-cni-628189: {Iface:virbr1 ExpiryTime:2023-12-13 01:29:09 +0000 UTC Type:0 Mac:52:54:00:7f:c4:4e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:newest-cni-628189 Clientid:01:52:54:00:7f:c4:4e}
	I1213 00:30:31.349151  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined IP address 192.168.39.196 and MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:31.349301  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHPort
	I1213 00:30:31.349484  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHKeyPath
	I1213 00:30:31.349542  184055 main.go:141] libmachine: (newest-cni-628189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:c4:4e", ip: ""} in network mk-newest-cni-628189: {Iface:virbr1 ExpiryTime:2023-12-13 01:29:09 +0000 UTC Type:0 Mac:52:54:00:7f:c4:4e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:newest-cni-628189 Clientid:01:52:54:00:7f:c4:4e}
	I1213 00:30:31.349576  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined IP address 192.168.39.196 and MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:31.349684  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHPort
	I1213 00:30:31.349728  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHUsername
	I1213 00:30:31.349868  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHKeyPath
	I1213 00:30:31.349916  184055 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/newest-cni-628189/id_rsa Username:docker}
	I1213 00:30:31.350046  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetSSHUsername
	I1213 00:30:31.350203  184055 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/newest-cni-628189/id_rsa Username:docker}
	I1213 00:30:31.469082  184055 ssh_runner.go:195] Run: systemctl --version
	I1213 00:30:31.476695  184055 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:30:31.655361  184055 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:30:31.662272  184055 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:30:31.662346  184055 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:30:31.679421  184055 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:30:31.679444  184055 start.go:475] detecting cgroup driver to use...
	I1213 00:30:31.679503  184055 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:30:31.694704  184055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:30:31.708335  184055 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:30:31.708398  184055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:30:31.722728  184055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:30:31.738125  184055 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:30:31.846650  184055 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:30:31.976754  184055 docker.go:219] disabling docker service ...
	I1213 00:30:31.976821  184055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:30:31.990942  184055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:30:32.004537  184055 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:30:32.110647  184055 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:30:32.233345  184055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:30:32.247225  184055 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:30:32.264961  184055 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1213 00:30:32.265029  184055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:30:32.275654  184055 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:30:32.275725  184055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:30:32.287574  184055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:30:32.300347  184055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:30:32.311195  184055 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:30:32.322769  184055 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:30:32.332715  184055 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:30:32.332818  184055 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:30:32.346337  184055 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:30:32.356550  184055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:30:32.474157  184055 ssh_runner.go:195] Run: sudo systemctl restart crio
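	Pieced together from the sed edits a few lines above, the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf should carry roughly the following key settings by the time crio is restarted (a sketch of the expected result with the surrounding TOML sections elided; not captured file contents):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
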
	I1213 00:30:32.643672  184055 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:30:32.643738  184055 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:30:32.651207  184055 start.go:543] Will wait 60s for crictl version
	I1213 00:30:32.651279  184055 ssh_runner.go:195] Run: which crictl
	I1213 00:30:32.655539  184055 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:30:32.701889  184055 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1213 00:30:32.701988  184055 ssh_runner.go:195] Run: crio --version
	I1213 00:30:32.747385  184055 ssh_runner.go:195] Run: crio --version
	I1213 00:30:32.794448  184055 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1213 00:30:32.795983  184055 main.go:141] libmachine: (newest-cni-628189) Calling .GetIP
	I1213 00:30:32.799104  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:32.799478  184055 main.go:141] libmachine: (newest-cni-628189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:c4:4e", ip: ""} in network mk-newest-cni-628189: {Iface:virbr1 ExpiryTime:2023-12-13 01:29:09 +0000 UTC Type:0 Mac:52:54:00:7f:c4:4e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:newest-cni-628189 Clientid:01:52:54:00:7f:c4:4e}
	I1213 00:30:32.799509  184055 main.go:141] libmachine: (newest-cni-628189) DBG | domain newest-cni-628189 has defined IP address 192.168.39.196 and MAC address 52:54:00:7f:c4:4e in network mk-newest-cni-628189
	I1213 00:30:32.799674  184055 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 00:30:32.804276  184055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:30:32.818667  184055 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 00:30:32.301097  183173 pod_ready.go:102] pod "coredns-5dd5756b68-bp5rz" in "kube-system" namespace has status "Ready":"False"
	I1213 00:30:34.799862  183173 pod_ready.go:102] pod "coredns-5dd5756b68-bp5rz" in "kube-system" namespace has status "Ready":"False"
	I1213 00:30:32.087257  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:32.587182  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:33.087706  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:33.587805  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:34.087813  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:34.587461  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:35.087520  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:35.587146  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:36.086958  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:36.587186  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:32.820234  184055 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1213 00:30:32.820308  184055 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:30:32.862448  184055 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1213 00:30:32.862511  184055 ssh_runner.go:195] Run: which lz4
	I1213 00:30:32.866778  184055 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 00:30:32.871165  184055 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 00:30:32.871204  184055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401739178 bytes)
	I1213 00:30:34.456464  184055 crio.go:444] Took 1.589732 seconds to copy over tarball
	I1213 00:30:34.456546  184055 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 00:30:36.802305  183173 pod_ready.go:102] pod "coredns-5dd5756b68-bp5rz" in "kube-system" namespace has status "Ready":"False"
	I1213 00:30:39.197234  183173 pod_ready.go:102] pod "coredns-5dd5756b68-bp5rz" in "kube-system" namespace has status "Ready":"False"
	I1213 00:30:41.299117  183173 pod_ready.go:102] pod "coredns-5dd5756b68-bp5rz" in "kube-system" namespace has status "Ready":"False"
	I1213 00:30:37.087701  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:37.586813  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:38.087577  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:38.854480  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:39.526525  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:40.109956  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:40.587174  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:41.087438  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:41.587529  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:37.317046  184055 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.860467476s)
	I1213 00:30:37.317072  184055 crio.go:451] Took 2.860580 seconds to extract the tarball
	I1213 00:30:37.317082  184055 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 00:30:37.357359  184055 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:30:37.408249  184055 crio.go:496] all images are preloaded for cri-o runtime.
	I1213 00:30:37.408284  184055 cache_images.go:84] Images are preloaded, skipping loading
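The preload path above avoids pulling images individually: minikube asks the runtime for its image list, copies a pre-baked tarball over SSH, unpacks it into /var, deletes it, and re-checks. A condensed sketch of the on-node half of that flow (illustrative only, not minikube's exact code):

    # skip extraction if the expected apiserver image is already present
    sudo crictl images --output json | grep -q 'kube-apiserver:v1.29.0-rc.2' || {
        sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4   # tarball assumed already copied here
        sudo rm -f /preloaded.tar.lz4
    }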
	I1213 00:30:37.408372  184055 ssh_runner.go:195] Run: crio config
	I1213 00:30:37.465086  184055 cni.go:84] Creating CNI manager for ""
	I1213 00:30:37.465111  184055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:30:37.465135  184055 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1213 00:30:37.465159  184055 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-628189 NodeName:newest-cni-628189 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 00:30:37.465317  184055 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-628189"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 00:30:37.465402  184055 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-628189 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-628189 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1213 00:30:37.465493  184055 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1213 00:30:37.476025  184055 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:30:37.476095  184055 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:30:37.486208  184055 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I1213 00:30:37.502878  184055 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1213 00:30:37.518927  184055 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1213 00:30:37.535297  184055 ssh_runner.go:195] Run: grep 192.168.39.196	control-plane.minikube.internal$ /etc/hosts
	I1213 00:30:37.538959  184055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:30:37.550469  184055 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/newest-cni-628189 for IP: 192.168.39.196
	I1213 00:30:37.550506  184055 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:30:37.550673  184055 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:30:37.550733  184055 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:30:37.550816  184055 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/newest-cni-628189/client.key
	I1213 00:30:37.550894  184055 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/newest-cni-628189/apiserver.key.85aad866
	I1213 00:30:37.550953  184055 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/newest-cni-628189/proxy-client.key
	I1213 00:30:37.551091  184055 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:30:37.551162  184055 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:30:37.551182  184055 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:30:37.551217  184055 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:30:37.551249  184055 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:30:37.551281  184055 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:30:37.551335  184055 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:30:37.552119  184055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/newest-cni-628189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:30:37.575175  184055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/newest-cni-628189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 00:30:37.600080  184055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/newest-cni-628189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:30:37.625777  184055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/newest-cni-628189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 00:30:37.653594  184055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:30:37.679450  184055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:30:37.703128  184055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:30:37.725209  184055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:30:37.748387  184055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:30:37.770947  184055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:30:37.794224  184055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:30:37.819966  184055 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:30:37.837646  184055 ssh_runner.go:195] Run: openssl version
	I1213 00:30:37.843932  184055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:30:37.856315  184055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:30:37.861254  184055 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:30:37.861307  184055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:30:37.867190  184055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 00:30:37.877336  184055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:30:37.888019  184055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:30:37.892892  184055 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:30:37.892935  184055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:30:37.898590  184055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1213 00:30:37.909623  184055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:30:37.921158  184055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:30:37.925954  184055 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:30:37.926019  184055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:30:37.931598  184055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
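The hash-and-symlink sequence above follows OpenSSL's subject-hash convention: the trust store is looked up via /etc/ssl/certs/<hash>.0 links rather than by file name. A minimal sketch of the same pattern, with the path taken from the log and everything else illustrative:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # 8 hex digits, e.g. b5213941 above
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"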
	I1213 00:30:37.943056  184055 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:30:37.949185  184055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 00:30:37.957015  184055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 00:30:37.963847  184055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 00:30:37.970033  184055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 00:30:37.976521  184055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 00:30:37.983143  184055 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
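The six openssl runs above are expiry checks rather than signature verification: -checkend N exits non-zero if the certificate expires within the next N seconds, which is what tells minikube whether a cert needs regenerating. A minimal sketch using the same 24-hour window:

    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/front-proxy-client.crt; then
        echo "valid for at least another 24h"
    else
        echo "expires within 24h - regenerate"
    fi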
	I1213 00:30:37.989642  184055 kubeadm.go:404] StartCluster: {Name:newest-cni-628189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:newest-cni-628189 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false syste
m_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:30:37.989770  184055 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:30:37.989825  184055 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:30:38.036086  184055 cri.go:89] found id: ""
	I1213 00:30:38.036189  184055 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:30:38.046215  184055 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1213 00:30:38.046243  184055 kubeadm.go:636] restartCluster start
	I1213 00:30:38.046314  184055 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 00:30:38.057329  184055 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:38.058062  184055 kubeconfig.go:135] verify returned: extract IP: "newest-cni-628189" does not appear in /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:30:38.058426  184055 kubeconfig.go:146] "newest-cni-628189" context is missing from /home/jenkins/minikube-integration/17777-136241/kubeconfig - will repair!
	I1213 00:30:38.059038  184055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:30:38.167860  184055 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 00:30:38.177936  184055 api_server.go:166] Checking apiserver status ...
	I1213 00:30:38.178030  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:30:38.189295  184055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:38.189320  184055 api_server.go:166] Checking apiserver status ...
	I1213 00:30:38.189366  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:30:38.201658  184055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:38.702303  184055 api_server.go:166] Checking apiserver status ...
	I1213 00:30:38.702389  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:30:38.713629  184055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:39.202130  184055 api_server.go:166] Checking apiserver status ...
	I1213 00:30:39.202208  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:30:39.214814  184055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:39.702334  184055 api_server.go:166] Checking apiserver status ...
	I1213 00:30:39.702433  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:30:39.713766  184055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:40.202012  184055 api_server.go:166] Checking apiserver status ...
	I1213 00:30:40.202080  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:30:40.214875  184055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:40.702475  184055 api_server.go:166] Checking apiserver status ...
	I1213 00:30:40.702583  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:30:40.714746  184055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:41.202337  184055 api_server.go:166] Checking apiserver status ...
	I1213 00:30:41.202420  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:30:41.216202  184055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:41.702379  184055 api_server.go:166] Checking apiserver status ...
	I1213 00:30:41.702471  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:30:41.714810  184055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:42.087569  183417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:30:42.236681  183417 kubeadm.go:1088] duration metric: took 11.114954744s to wait for elevateKubeSystemPrivileges.
	I1213 00:30:42.236727  183417 kubeadm.go:406] StartCluster complete in 25.227384916s
	I1213 00:30:42.236761  183417 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:30:42.236865  183417 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:30:42.238903  183417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:30:42.239189  183417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:30:42.239218  183417 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:30:42.239305  183417 addons.go:69] Setting storage-provisioner=true in profile "kindnet-120988"
	I1213 00:30:42.239326  183417 addons.go:231] Setting addon storage-provisioner=true in "kindnet-120988"
	I1213 00:30:42.239325  183417 addons.go:69] Setting default-storageclass=true in profile "kindnet-120988"
	I1213 00:30:42.239346  183417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-120988"
	I1213 00:30:42.239381  183417 host.go:66] Checking if "kindnet-120988" exists ...
	I1213 00:30:42.239438  183417 config.go:182] Loaded profile config "kindnet-120988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:30:42.239793  183417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:30:42.239834  183417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:30:42.239926  183417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:30:42.239965  183417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:30:42.255729  183417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46721
	I1213 00:30:42.256124  183417 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:30:42.256696  183417 main.go:141] libmachine: Using API Version  1
	I1213 00:30:42.256726  183417 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:30:42.257125  183417 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:30:42.257678  183417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:30:42.257722  183417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:30:42.258037  183417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
	I1213 00:30:42.258400  183417 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:30:42.258921  183417 main.go:141] libmachine: Using API Version  1
	I1213 00:30:42.258942  183417 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:30:42.259326  183417 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:30:42.259520  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetState
	I1213 00:30:42.263038  183417 addons.go:231] Setting addon default-storageclass=true in "kindnet-120988"
	I1213 00:30:42.263078  183417 host.go:66] Checking if "kindnet-120988" exists ...
	I1213 00:30:42.263479  183417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:30:42.263546  183417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:30:42.274416  183417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33481
	I1213 00:30:42.274909  183417 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:30:42.275392  183417 main.go:141] libmachine: Using API Version  1
	I1213 00:30:42.275418  183417 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:30:42.275783  183417 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:30:42.276111  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetState
	I1213 00:30:42.278154  183417 main.go:141] libmachine: (kindnet-120988) Calling .DriverName
	I1213 00:30:42.279980  183417 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:30:42.281831  183417 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:30:42.281847  183417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:30:42.281861  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHHostname
	I1213 00:30:42.280062  183417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38817
	I1213 00:30:42.282298  183417 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:30:42.282821  183417 main.go:141] libmachine: Using API Version  1
	I1213 00:30:42.282844  183417 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:30:42.283267  183417 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:30:42.283845  183417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:30:42.283889  183417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:30:42.285262  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:42.285768  183417 main.go:141] libmachine: (kindnet-120988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7a:87", ip: ""} in network mk-kindnet-120988: {Iface:virbr4 ExpiryTime:2023-12-13 01:29:59 +0000 UTC Type:0 Mac:52:54:00:61:7a:87 Iaid: IPaddr:192.168.61.213 Prefix:24 Hostname:kindnet-120988 Clientid:01:52:54:00:61:7a:87}
	I1213 00:30:42.285799  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined IP address 192.168.61.213 and MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:42.285958  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHPort
	I1213 00:30:42.286168  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHKeyPath
	I1213 00:30:42.286340  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHUsername
	I1213 00:30:42.286469  183417 sshutil.go:53] new ssh client: &{IP:192.168.61.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/kindnet-120988/id_rsa Username:docker}
	I1213 00:30:42.302930  183417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36193
	I1213 00:30:42.304106  183417 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:30:42.304691  183417 main.go:141] libmachine: Using API Version  1
	I1213 00:30:42.304714  183417 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:30:42.305043  183417 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:30:42.305206  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetState
	I1213 00:30:42.307218  183417 main.go:141] libmachine: (kindnet-120988) Calling .DriverName
	I1213 00:30:42.307508  183417 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:30:42.307529  183417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:30:42.307548  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHHostname
	I1213 00:30:42.310497  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:42.310941  183417 main.go:141] libmachine: (kindnet-120988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:7a:87", ip: ""} in network mk-kindnet-120988: {Iface:virbr4 ExpiryTime:2023-12-13 01:29:59 +0000 UTC Type:0 Mac:52:54:00:61:7a:87 Iaid: IPaddr:192.168.61.213 Prefix:24 Hostname:kindnet-120988 Clientid:01:52:54:00:61:7a:87}
	I1213 00:30:42.310968  183417 main.go:141] libmachine: (kindnet-120988) DBG | domain kindnet-120988 has defined IP address 192.168.61.213 and MAC address 52:54:00:61:7a:87 in network mk-kindnet-120988
	I1213 00:30:42.311217  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHPort
	I1213 00:30:42.311425  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHKeyPath
	I1213 00:30:42.311621  183417 main.go:141] libmachine: (kindnet-120988) Calling .GetSSHUsername
	I1213 00:30:42.311745  183417 sshutil.go:53] new ssh client: &{IP:192.168.61.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/kindnet-120988/id_rsa Username:docker}
	I1213 00:30:42.319986  183417 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kindnet-120988" context rescaled to 1 replicas
	I1213 00:30:42.320020  183417 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.61.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:30:42.321717  183417 out.go:177] * Verifying Kubernetes components...
	I1213 00:30:42.323142  183417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:30:42.506863  183417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
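The sed pipeline above patches the coredns ConfigMap so cluster DNS can resolve host.minikube.internal to the host-side gateway. Assuming the stock Corefile layout, the fragment it inserts ahead of the forward plugin looks roughly like this (shown here as comments), with a quick way to verify it afterwards:

    #    hosts {
    #       192.168.61.1 host.minikube.internal
    #       fallthrough
    #    }
    kubectl --context kindnet-120988 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'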
	I1213 00:30:42.508106  183417 node_ready.go:35] waiting up to 15m0s for node "kindnet-120988" to be "Ready" ...
	I1213 00:30:42.536893  183417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:30:42.643796  183417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:30:43.202549  183417 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1213 00:30:43.452977  183417 main.go:141] libmachine: Making call to close driver server
	I1213 00:30:43.453020  183417 main.go:141] libmachine: (kindnet-120988) Calling .Close
	I1213 00:30:43.453017  183417 main.go:141] libmachine: Making call to close driver server
	I1213 00:30:43.453043  183417 main.go:141] libmachine: (kindnet-120988) Calling .Close
	I1213 00:30:43.453305  183417 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:30:43.453328  183417 main.go:141] libmachine: (kindnet-120988) DBG | Closing plugin on server side
	I1213 00:30:43.453333  183417 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:30:43.453342  183417 main.go:141] libmachine: Making call to close driver server
	I1213 00:30:43.453351  183417 main.go:141] libmachine: (kindnet-120988) Calling .Close
	I1213 00:30:43.453405  183417 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:30:43.453433  183417 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:30:43.453447  183417 main.go:141] libmachine: Making call to close driver server
	I1213 00:30:43.453458  183417 main.go:141] libmachine: (kindnet-120988) Calling .Close
	I1213 00:30:43.453658  183417 main.go:141] libmachine: (kindnet-120988) DBG | Closing plugin on server side
	I1213 00:30:43.453690  183417 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:30:43.453699  183417 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:30:43.453740  183417 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:30:43.453760  183417 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:30:43.479598  183417 main.go:141] libmachine: Making call to close driver server
	I1213 00:30:43.479628  183417 main.go:141] libmachine: (kindnet-120988) Calling .Close
	I1213 00:30:43.479928  183417 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:30:43.479950  183417 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:30:43.481633  183417 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1213 00:30:43.300415  183173 pod_ready.go:102] pod "coredns-5dd5756b68-bp5rz" in "kube-system" namespace has status "Ready":"False"
	I1213 00:30:45.799762  183173 pod_ready.go:102] pod "coredns-5dd5756b68-bp5rz" in "kube-system" namespace has status "Ready":"False"
	I1213 00:30:43.482836  183417 addons.go:502] enable addons completed in 1.243627046s: enabled=[storage-provisioner default-storageclass]
	I1213 00:30:44.534810  183417 node_ready.go:58] node "kindnet-120988" has status "Ready":"False"
	I1213 00:30:46.536772  183417 node_ready.go:58] node "kindnet-120988" has status "Ready":"False"
	I1213 00:30:42.202550  184055 api_server.go:166] Checking apiserver status ...
	I1213 00:30:42.202640  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:30:42.218316  184055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:42.701819  184055 api_server.go:166] Checking apiserver status ...
	I1213 00:30:42.701922  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:30:42.717413  184055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:43.201945  184055 api_server.go:166] Checking apiserver status ...
	I1213 00:30:43.202048  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:30:43.214789  184055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:43.702745  184055 api_server.go:166] Checking apiserver status ...
	I1213 00:30:43.702831  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:30:43.715470  184055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:44.201811  184055 api_server.go:166] Checking apiserver status ...
	I1213 00:30:44.201893  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:30:44.213378  184055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:44.702635  184055 api_server.go:166] Checking apiserver status ...
	I1213 00:30:44.702757  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:30:44.713578  184055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:45.202123  184055 api_server.go:166] Checking apiserver status ...
	I1213 00:30:45.202236  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:30:45.214287  184055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:45.702643  184055 api_server.go:166] Checking apiserver status ...
	I1213 00:30:45.702739  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:30:45.714481  184055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:46.201983  184055 api_server.go:166] Checking apiserver status ...
	I1213 00:30:46.202098  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:30:46.214266  184055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:46.702779  184055 api_server.go:166] Checking apiserver status ...
	I1213 00:30:46.702851  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:30:46.714296  184055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:47.800568  183173 pod_ready.go:102] pod "coredns-5dd5756b68-bp5rz" in "kube-system" namespace has status "Ready":"False"
	I1213 00:30:49.800734  183173 pod_ready.go:102] pod "coredns-5dd5756b68-bp5rz" in "kube-system" namespace has status "Ready":"False"
	I1213 00:30:47.091329  183417 node_ready.go:49] node "kindnet-120988" has status "Ready":"True"
	I1213 00:30:47.091358  183417 node_ready.go:38] duration metric: took 4.583230354s waiting for node "kindnet-120988" to be "Ready" ...
	I1213 00:30:47.091372  183417 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:30:47.100684  183417 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-6vjv6" in "kube-system" namespace to be "Ready" ...
	I1213 00:30:49.121721  183417 pod_ready.go:102] pod "coredns-5dd5756b68-6vjv6" in "kube-system" namespace has status "Ready":"False"
	I1213 00:30:49.623345  183417 pod_ready.go:92] pod "coredns-5dd5756b68-6vjv6" in "kube-system" namespace has status "Ready":"True"
	I1213 00:30:49.623383  183417 pod_ready.go:81] duration metric: took 2.522665137s waiting for pod "coredns-5dd5756b68-6vjv6" in "kube-system" namespace to be "Ready" ...
	I1213 00:30:49.623397  183417 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-120988" in "kube-system" namespace to be "Ready" ...
	I1213 00:30:50.151617  183417 pod_ready.go:92] pod "etcd-kindnet-120988" in "kube-system" namespace has status "Ready":"True"
	I1213 00:30:50.151640  183417 pod_ready.go:81] duration metric: took 528.229472ms waiting for pod "etcd-kindnet-120988" in "kube-system" namespace to be "Ready" ...
	I1213 00:30:50.151655  183417 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-120988" in "kube-system" namespace to be "Ready" ...
	I1213 00:30:50.161376  183417 pod_ready.go:92] pod "kube-apiserver-kindnet-120988" in "kube-system" namespace has status "Ready":"True"
	I1213 00:30:50.161401  183417 pod_ready.go:81] duration metric: took 9.739514ms waiting for pod "kube-apiserver-kindnet-120988" in "kube-system" namespace to be "Ready" ...
	I1213 00:30:50.161411  183417 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-120988" in "kube-system" namespace to be "Ready" ...
	I1213 00:30:50.169710  183417 pod_ready.go:92] pod "kube-controller-manager-kindnet-120988" in "kube-system" namespace has status "Ready":"True"
	I1213 00:30:50.169732  183417 pod_ready.go:81] duration metric: took 8.313745ms waiting for pod "kube-controller-manager-kindnet-120988" in "kube-system" namespace to be "Ready" ...
	I1213 00:30:50.169747  183417 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-lkj46" in "kube-system" namespace to be "Ready" ...
	I1213 00:30:50.419293  183417 pod_ready.go:92] pod "kube-proxy-lkj46" in "kube-system" namespace has status "Ready":"True"
	I1213 00:30:50.419319  183417 pod_ready.go:81] duration metric: took 249.564168ms waiting for pod "kube-proxy-lkj46" in "kube-system" namespace to be "Ready" ...
	I1213 00:30:50.419332  183417 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-120988" in "kube-system" namespace to be "Ready" ...
	I1213 00:30:50.819748  183417 pod_ready.go:92] pod "kube-scheduler-kindnet-120988" in "kube-system" namespace has status "Ready":"True"
	I1213 00:30:50.819769  183417 pod_ready.go:81] duration metric: took 400.430016ms waiting for pod "kube-scheduler-kindnet-120988" in "kube-system" namespace to be "Ready" ...
	I1213 00:30:50.819779  183417 pod_ready.go:38] duration metric: took 3.728386216s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:30:50.819793  183417 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:30:50.819839  183417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:30:50.836834  183417 api_server.go:72] duration metric: took 8.516781041s to wait for apiserver process to appear ...
	I1213 00:30:50.836863  183417 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:30:50.836895  183417 api_server.go:253] Checking apiserver healthz at https://192.168.61.213:8443/healthz ...
	I1213 00:30:50.842732  183417 api_server.go:279] https://192.168.61.213:8443/healthz returned 200:
	ok
	I1213 00:30:50.843835  183417 api_server.go:141] control plane version: v1.28.4
	I1213 00:30:50.843854  183417 api_server.go:131] duration metric: took 6.983986ms to wait for apiserver health ...
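The health wait above simply polls /healthz until the apiserver answers 200. A manual equivalent, assuming the default system:public-info-viewer binding that allows unauthenticated reads of /healthz (TLS verification skipped with -k for brevity):

    curl -sk https://192.168.61.213:8443/healthz; echo
    # expected output: ok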
	I1213 00:30:50.843861  183417 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:30:51.023542  183417 system_pods.go:59] 8 kube-system pods found
	I1213 00:30:51.023588  183417 system_pods.go:61] "coredns-5dd5756b68-6vjv6" [1a198266-72d3-4279-ad87-a80f11cee292] Running
	I1213 00:30:51.023598  183417 system_pods.go:61] "etcd-kindnet-120988" [98c362e4-68bc-402a-beb1-7721816d141b] Running
	I1213 00:30:51.023605  183417 system_pods.go:61] "kindnet-gj5qk" [cd5e2271-699a-42c0-a25c-d12e7ec0aaa0] Running
	I1213 00:30:51.023612  183417 system_pods.go:61] "kube-apiserver-kindnet-120988" [2daf321c-2436-403b-a9f5-a46b91dd5961] Running
	I1213 00:30:51.023619  183417 system_pods.go:61] "kube-controller-manager-kindnet-120988" [2d4c840a-7eae-4e4a-b46e-ac7f83441c45] Running
	I1213 00:30:51.023625  183417 system_pods.go:61] "kube-proxy-lkj46" [969fa770-4aab-4978-a9bb-0ff5b6300a08] Running
	I1213 00:30:51.023631  183417 system_pods.go:61] "kube-scheduler-kindnet-120988" [c195de68-eb4f-4816-a4f7-6716c3b7983f] Running
	I1213 00:30:51.023641  183417 system_pods.go:61] "storage-provisioner" [c446e559-c139-42b1-bae3-94dfa96d4c9f] Running
	I1213 00:30:51.023652  183417 system_pods.go:74] duration metric: took 179.782886ms to wait for pod list to return data ...
	I1213 00:30:51.023666  183417 default_sa.go:34] waiting for default service account to be created ...
	I1213 00:30:51.219926  183417 default_sa.go:45] found service account: "default"
	I1213 00:30:51.219952  183417 default_sa.go:55] duration metric: took 196.275511ms for default service account to be created ...
	I1213 00:30:51.219962  183417 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 00:30:51.422543  183417 system_pods.go:86] 8 kube-system pods found
	I1213 00:30:51.422577  183417 system_pods.go:89] "coredns-5dd5756b68-6vjv6" [1a198266-72d3-4279-ad87-a80f11cee292] Running
	I1213 00:30:51.422587  183417 system_pods.go:89] "etcd-kindnet-120988" [98c362e4-68bc-402a-beb1-7721816d141b] Running
	I1213 00:30:51.422594  183417 system_pods.go:89] "kindnet-gj5qk" [cd5e2271-699a-42c0-a25c-d12e7ec0aaa0] Running
	I1213 00:30:51.422600  183417 system_pods.go:89] "kube-apiserver-kindnet-120988" [2daf321c-2436-403b-a9f5-a46b91dd5961] Running
	I1213 00:30:51.422607  183417 system_pods.go:89] "kube-controller-manager-kindnet-120988" [2d4c840a-7eae-4e4a-b46e-ac7f83441c45] Running
	I1213 00:30:51.422613  183417 system_pods.go:89] "kube-proxy-lkj46" [969fa770-4aab-4978-a9bb-0ff5b6300a08] Running
	I1213 00:30:51.422619  183417 system_pods.go:89] "kube-scheduler-kindnet-120988" [c195de68-eb4f-4816-a4f7-6716c3b7983f] Running
	I1213 00:30:51.422625  183417 system_pods.go:89] "storage-provisioner" [c446e559-c139-42b1-bae3-94dfa96d4c9f] Running
	I1213 00:30:51.422635  183417 system_pods.go:126] duration metric: took 202.66667ms to wait for k8s-apps to be running ...
	I1213 00:30:51.422648  183417 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 00:30:51.422705  183417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:30:51.439293  183417 system_svc.go:56] duration metric: took 16.634229ms WaitForService to wait for kubelet.
	I1213 00:30:51.439319  183417 kubeadm.go:581] duration metric: took 9.119270574s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1213 00:30:51.439340  183417 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:30:51.619826  183417 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:30:51.619864  183417 node_conditions.go:123] node cpu capacity is 2
	I1213 00:30:51.619875  183417 node_conditions.go:105] duration metric: took 180.528633ms to run NodePressure ...
	I1213 00:30:51.619891  183417 start.go:228] waiting for startup goroutines ...
	I1213 00:30:51.619903  183417 start.go:233] waiting for cluster config update ...
	I1213 00:30:51.619920  183417 start.go:242] writing updated cluster config ...
	I1213 00:30:51.620242  183417 ssh_runner.go:195] Run: rm -f paused
	I1213 00:30:51.683640  183417 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1213 00:30:51.685582  183417 out.go:177] * Done! kubectl is now configured to use "kindnet-120988" cluster and "default" namespace by default
	I1213 00:30:47.201926  184055 api_server.go:166] Checking apiserver status ...
	I1213 00:30:47.202035  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:30:47.215272  184055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:47.701800  184055 api_server.go:166] Checking apiserver status ...
	I1213 00:30:47.701893  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:30:47.712637  184055 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:30:48.178486  184055 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1213 00:30:48.178514  184055 kubeadm.go:1135] stopping kube-system containers ...
	I1213 00:30:48.178544  184055 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 00:30:48.178623  184055 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:30:48.225751  184055 cri.go:89] found id: ""
	I1213 00:30:48.225840  184055 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 00:30:48.244155  184055 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:30:48.253590  184055 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:30:48.253702  184055 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:30:48.263612  184055 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1213 00:30:48.263652  184055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:30:48.403770  184055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:30:50.030755  184055 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.626948503s)
	I1213 00:30:50.030804  184055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:30:50.237324  184055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:30:50.339401  184055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:30:50.404796  184055 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:30:50.404891  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:30:50.420254  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:30:50.939825  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:30:51.440025  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:30:51.939241  184055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
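	The interleaved 184055 lines above show the apiserver wait loop: after the kubeadm init phases, minikube re-probes `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every half second until a pid appears or the wait deadline expires. A minimal Go sketch of such a poll loop is given below for illustration only; the real code executes the probe over SSH inside the guest, and the two-minute deadline here is an assumed value.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// apiserverPID runs the same probe seen in the log and returns the newest
	// matching pid, or an error if no kube-apiserver process is found yet.
	func apiserverPID() (string, error) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			if pid, err := apiserverPID(); err == nil {
				fmt.Println("kube-apiserver pid:", pid)
				return
			}
			// The log shows roughly 500ms between retries.
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the kube-apiserver process")
	}
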
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-12-13 00:09:14 UTC, ends at Wed 2023-12-13 00:30:53 UTC. --
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.617932259Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427453617912233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f0a8607b-9438-40dc-ac87-5d1f0e5aad63 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.618866892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bdec92f2-0f31-486d-a06f-f99055ffe99c name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.618930351Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bdec92f2-0f31-486d-a06f-f99055ffe99c name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.619249667Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974,PodSandboxId:81a2e8655210ba4fcad23e3b474f12c91c64adc9305aa41c81cf5eab4215dea8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426221357803656,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d87ee16e-300f-4797-b0be-efc256d0e827,},Annotations:map[string]string{io.kubernetes.container.hash: cfa710aa,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bb98c21bcff8726228600821eacf235eb2353d7e4e4d8a88630582809c061e,PodSandboxId:ec031568c78d926e873d15c56f30e3012b5c398ea4bbd975f9e90a08e7fb17ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702426199308460147,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3227111a-055e-48bc-abe1-5162c09b58da,},Annotations:map[string]string{io.kubernetes.container.hash: b2e47e3c,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad,PodSandboxId:975f035f145ad7a5dbf2062d52fc4129cafc249b2d3be62dd4714336bbd7b535,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702426195822454391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ftv9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d9730b-2e6b-4263-a70d-273cf6837f60,},Annotations:map[string]string{io.kubernetes.container.hash: b3e9de0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a,PodSandboxId:81a2e8655210ba4fcad23e3b474f12c91c64adc9305aa41c81cf5eab4215dea8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702426190137027744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: d87ee16e-300f-4797-b0be-efc256d0e827,},Annotations:map[string]string{io.kubernetes.container.hash: cfa710aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41,PodSandboxId:a0fefa8877f017ebeb4bc2b3102f04d8a8344acd0e2440054d81f908ca4858a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702426189879010423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zk4wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
20fe8f7-0c1f-4be3-8184-cd3d6cc19a43,},Annotations:map[string]string{io.kubernetes.container.hash: b6cc6198,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7,PodSandboxId:c985b7ae69ba37bc8f7025da34682fcb5118ba566c618e7dc4cbed049b75416b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702426182958030058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a26c02c6afbf1d5b4c9f495af669df65,},An
notations:map[string]string{io.kubernetes.container.hash: 8fc23393,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee,PodSandboxId:a568f2ed2f3063586c863022fc223732c8fb2e89aa61194154ac76a0e815ab72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702426182646324616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dc52e0e7f1079fa01d7df8c37839c50,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673,PodSandboxId:0a36632b4feccd3eca6a32fae7b9eaa141825401eac3db221dbf8f7f131a034b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702426182514141514,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
72fbc2300f80dc239672c9760e6a959,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1,PodSandboxId:6a049b2f1db3f4f51b1007c573f5e82a975fee935eb69fed92d815cfed5858d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702426182170181549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
4b09929ec1461cf7ee413fae258f89e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c2ca634,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bdec92f2-0f31-486d-a06f-f99055ffe99c name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.670758247Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8e4530b1-71a3-48b6-b80d-a4055e3eaa57 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.670815441Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8e4530b1-71a3-48b6-b80d-a4055e3eaa57 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.672386500Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=73569f42-19bb-471c-a484-1fa134c04ec4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.672797472Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427453672784397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=73569f42-19bb-471c-a484-1fa134c04ec4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.673309128Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=562f1716-2324-4131-85fe-99dd197b6789 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.673356974Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=562f1716-2324-4131-85fe-99dd197b6789 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.673544872Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974,PodSandboxId:81a2e8655210ba4fcad23e3b474f12c91c64adc9305aa41c81cf5eab4215dea8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426221357803656,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d87ee16e-300f-4797-b0be-efc256d0e827,},Annotations:map[string]string{io.kubernetes.container.hash: cfa710aa,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bb98c21bcff8726228600821eacf235eb2353d7e4e4d8a88630582809c061e,PodSandboxId:ec031568c78d926e873d15c56f30e3012b5c398ea4bbd975f9e90a08e7fb17ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702426199308460147,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3227111a-055e-48bc-abe1-5162c09b58da,},Annotations:map[string]string{io.kubernetes.container.hash: b2e47e3c,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad,PodSandboxId:975f035f145ad7a5dbf2062d52fc4129cafc249b2d3be62dd4714336bbd7b535,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702426195822454391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ftv9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d9730b-2e6b-4263-a70d-273cf6837f60,},Annotations:map[string]string{io.kubernetes.container.hash: b3e9de0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a,PodSandboxId:81a2e8655210ba4fcad23e3b474f12c91c64adc9305aa41c81cf5eab4215dea8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702426190137027744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: d87ee16e-300f-4797-b0be-efc256d0e827,},Annotations:map[string]string{io.kubernetes.container.hash: cfa710aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41,PodSandboxId:a0fefa8877f017ebeb4bc2b3102f04d8a8344acd0e2440054d81f908ca4858a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702426189879010423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zk4wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
20fe8f7-0c1f-4be3-8184-cd3d6cc19a43,},Annotations:map[string]string{io.kubernetes.container.hash: b6cc6198,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7,PodSandboxId:c985b7ae69ba37bc8f7025da34682fcb5118ba566c618e7dc4cbed049b75416b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702426182958030058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a26c02c6afbf1d5b4c9f495af669df65,},An
notations:map[string]string{io.kubernetes.container.hash: 8fc23393,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee,PodSandboxId:a568f2ed2f3063586c863022fc223732c8fb2e89aa61194154ac76a0e815ab72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702426182646324616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dc52e0e7f1079fa01d7df8c37839c50,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673,PodSandboxId:0a36632b4feccd3eca6a32fae7b9eaa141825401eac3db221dbf8f7f131a034b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702426182514141514,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
72fbc2300f80dc239672c9760e6a959,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1,PodSandboxId:6a049b2f1db3f4f51b1007c573f5e82a975fee935eb69fed92d815cfed5858d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702426182170181549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
4b09929ec1461cf7ee413fae258f89e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c2ca634,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=562f1716-2324-4131-85fe-99dd197b6789 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.716344996Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1a438339-52ea-4271-a71d-ad11aa455bce name=/runtime.v1.RuntimeService/Version
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.716438359Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1a438339-52ea-4271-a71d-ad11aa455bce name=/runtime.v1.RuntimeService/Version
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.718240321Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=824759dc-0d05-4157-897f-53cd558e70f6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.719063627Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427453719042853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=824759dc-0d05-4157-897f-53cd558e70f6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.719913998Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4388faa8-8473-4d9e-a58b-e16b191c74ed name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.719962760Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4388faa8-8473-4d9e-a58b-e16b191c74ed name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.720268655Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974,PodSandboxId:81a2e8655210ba4fcad23e3b474f12c91c64adc9305aa41c81cf5eab4215dea8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426221357803656,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d87ee16e-300f-4797-b0be-efc256d0e827,},Annotations:map[string]string{io.kubernetes.container.hash: cfa710aa,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bb98c21bcff8726228600821eacf235eb2353d7e4e4d8a88630582809c061e,PodSandboxId:ec031568c78d926e873d15c56f30e3012b5c398ea4bbd975f9e90a08e7fb17ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702426199308460147,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3227111a-055e-48bc-abe1-5162c09b58da,},Annotations:map[string]string{io.kubernetes.container.hash: b2e47e3c,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad,PodSandboxId:975f035f145ad7a5dbf2062d52fc4129cafc249b2d3be62dd4714336bbd7b535,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702426195822454391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ftv9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d9730b-2e6b-4263-a70d-273cf6837f60,},Annotations:map[string]string{io.kubernetes.container.hash: b3e9de0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a,PodSandboxId:81a2e8655210ba4fcad23e3b474f12c91c64adc9305aa41c81cf5eab4215dea8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702426190137027744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: d87ee16e-300f-4797-b0be-efc256d0e827,},Annotations:map[string]string{io.kubernetes.container.hash: cfa710aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41,PodSandboxId:a0fefa8877f017ebeb4bc2b3102f04d8a8344acd0e2440054d81f908ca4858a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702426189879010423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zk4wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
20fe8f7-0c1f-4be3-8184-cd3d6cc19a43,},Annotations:map[string]string{io.kubernetes.container.hash: b6cc6198,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7,PodSandboxId:c985b7ae69ba37bc8f7025da34682fcb5118ba566c618e7dc4cbed049b75416b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702426182958030058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a26c02c6afbf1d5b4c9f495af669df65,},An
notations:map[string]string{io.kubernetes.container.hash: 8fc23393,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee,PodSandboxId:a568f2ed2f3063586c863022fc223732c8fb2e89aa61194154ac76a0e815ab72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702426182646324616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dc52e0e7f1079fa01d7df8c37839c50,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673,PodSandboxId:0a36632b4feccd3eca6a32fae7b9eaa141825401eac3db221dbf8f7f131a034b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702426182514141514,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
72fbc2300f80dc239672c9760e6a959,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1,PodSandboxId:6a049b2f1db3f4f51b1007c573f5e82a975fee935eb69fed92d815cfed5858d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702426182170181549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
4b09929ec1461cf7ee413fae258f89e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c2ca634,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4388faa8-8473-4d9e-a58b-e16b191c74ed name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.762886383Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=424c0ce7-ece4-4c2a-b9ef-d64527a924d2 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.762946587Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=424c0ce7-ece4-4c2a-b9ef-d64527a924d2 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.764491264Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=dbab0d65-1d4e-4fc4-b534-5f1a602f14ef name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.764952676Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427453764938364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=dbab0d65-1d4e-4fc4-b534-5f1a602f14ef name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.765505801Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=65966ff4-eec7-44c1-af35-b7eef1a2e496 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.765547223Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=65966ff4-eec7-44c1-af35-b7eef1a2e496 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:30:53 default-k8s-diff-port-743278 crio[709]: time="2023-12-13 00:30:53.765811648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974,PodSandboxId:81a2e8655210ba4fcad23e3b474f12c91c64adc9305aa41c81cf5eab4215dea8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426221357803656,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d87ee16e-300f-4797-b0be-efc256d0e827,},Annotations:map[string]string{io.kubernetes.container.hash: cfa710aa,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bb98c21bcff8726228600821eacf235eb2353d7e4e4d8a88630582809c061e,PodSandboxId:ec031568c78d926e873d15c56f30e3012b5c398ea4bbd975f9e90a08e7fb17ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702426199308460147,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3227111a-055e-48bc-abe1-5162c09b58da,},Annotations:map[string]string{io.kubernetes.container.hash: b2e47e3c,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad,PodSandboxId:975f035f145ad7a5dbf2062d52fc4129cafc249b2d3be62dd4714336bbd7b535,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702426195822454391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ftv9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d9730b-2e6b-4263-a70d-273cf6837f60,},Annotations:map[string]string{io.kubernetes.container.hash: b3e9de0f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a,PodSandboxId:81a2e8655210ba4fcad23e3b474f12c91c64adc9305aa41c81cf5eab4215dea8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702426190137027744,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: d87ee16e-300f-4797-b0be-efc256d0e827,},Annotations:map[string]string{io.kubernetes.container.hash: cfa710aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41,PodSandboxId:a0fefa8877f017ebeb4bc2b3102f04d8a8344acd0e2440054d81f908ca4858a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702426189879010423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zk4wl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
20fe8f7-0c1f-4be3-8184-cd3d6cc19a43,},Annotations:map[string]string{io.kubernetes.container.hash: b6cc6198,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7,PodSandboxId:c985b7ae69ba37bc8f7025da34682fcb5118ba566c618e7dc4cbed049b75416b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702426182958030058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a26c02c6afbf1d5b4c9f495af669df65,},An
notations:map[string]string{io.kubernetes.container.hash: 8fc23393,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee,PodSandboxId:a568f2ed2f3063586c863022fc223732c8fb2e89aa61194154ac76a0e815ab72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702426182646324616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dc52e0e7f1079fa01d7df8c37839c50,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673,PodSandboxId:0a36632b4feccd3eca6a32fae7b9eaa141825401eac3db221dbf8f7f131a034b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702426182514141514,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
72fbc2300f80dc239672c9760e6a959,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1,PodSandboxId:6a049b2f1db3f4f51b1007c573f5e82a975fee935eb69fed92d815cfed5858d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702426182170181549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-743278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
4b09929ec1461cf7ee413fae258f89e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c2ca634,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=65966ff4-eec7-44c1-af35-b7eef1a2e496 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c290417afdb45       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   81a2e8655210b       storage-provisioner
	c8bb98c21bcff       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   ec031568c78d9       busybox
	125252879d69a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      20 minutes ago      Running             coredns                   1                   975f035f145ad       coredns-5dd5756b68-ftv9l
	705b27e3bd760       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   81a2e8655210b       storage-provisioner
	545581d8fb2dd       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      21 minutes ago      Running             kube-proxy                1                   a0fefa8877f01       kube-proxy-zk4wl
	fd8469f4d2e98       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      21 minutes ago      Running             etcd                      1                   c985b7ae69ba3       etcd-default-k8s-diff-port-743278
	c94b9bf453ae3       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      21 minutes ago      Running             kube-scheduler            1                   a568f2ed2f306       kube-scheduler-default-k8s-diff-port-743278
	57e6249b6837d       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      21 minutes ago      Running             kube-controller-manager   1                   0a36632b4fecc       kube-controller-manager-default-k8s-diff-port-743278
	c4c918252a292       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      21 minutes ago      Running             kube-apiserver            1                   6a049b2f1db3f       kube-apiserver-default-k8s-diff-port-743278
	
	* 
	* ==> coredns [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:32823 - 224 "HINFO IN 3325233440478565840.819332321352294558. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015804027s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-743278
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-743278
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446
	                    minikube.k8s.io/name=default-k8s-diff-port-743278
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_13T00_01_37_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Dec 2023 00:01:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-743278
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 13 Dec 2023 00:30:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 13 Dec 2023 00:30:44 +0000   Wed, 13 Dec 2023 00:01:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 13 Dec 2023 00:30:44 +0000   Wed, 13 Dec 2023 00:01:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 13 Dec 2023 00:30:44 +0000   Wed, 13 Dec 2023 00:01:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 13 Dec 2023 00:30:44 +0000   Wed, 13 Dec 2023 00:09:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.144
	  Hostname:    default-k8s-diff-port-743278
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 01ab38812de04a528da538e9dc0b7d5c
	  System UUID:                01ab3881-2de0-4a52-8da5-38e9dc0b7d5c
	  Boot ID:                    9a33e9f0-dbcd-4523-b2ac-2b7554456859
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-ftv9l                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-743278                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-743278             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-743278    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-zk4wl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-743278             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-6q9jg                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-743278 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-743278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-743278 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-743278 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-743278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-743278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-743278 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-743278 event: Registered Node default-k8s-diff-port-743278 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-743278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-743278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-743278 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-743278 event: Registered Node default-k8s-diff-port-743278 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec13 00:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.075372] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.700866] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.655871] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.163833] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000081] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.650326] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000069] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.307485] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.116687] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.162977] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.131569] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.235780] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +17.473233] systemd-fstab-generator[908]: Ignoring "noauto" for root device
	[Dec13 00:10] kauditd_printk_skb: 29 callbacks suppressed
	
	* 
	* ==> etcd [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7] <==
	* {"level":"warn","ts":"2023-12-13T00:29:49.953332Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.6837ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15328437152858821313 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.144\" mod_revision:1547 > success:<request_put:<key:\"/registry/masterleases/192.168.72.144\" value_size:67 lease:6105065116004045503 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.144\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-13T00:29:49.953473Z","caller":"traceutil/trace.go:171","msg":"trace[81793784] linearizableReadLoop","detail":"{readStateIndex:1836; appliedIndex:1835; }","duration":"226.325812ms","start":"2023-12-13T00:29:49.727115Z","end":"2023-12-13T00:29:49.953441Z","steps":["trace[81793784] 'read index received'  (duration: 89.218137ms)","trace[81793784] 'applied index is now lower than readState.Index'  (duration: 137.106151ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-13T00:29:49.953569Z","caller":"traceutil/trace.go:171","msg":"trace[944425173] transaction","detail":"{read_only:false; response_revision:1556; number_of_response:1; }","duration":"255.635103ms","start":"2023-12-13T00:29:49.697921Z","end":"2023-12-13T00:29:49.953556Z","steps":["trace[944425173] 'process raft request'  (duration: 118.471958ms)","trace[944425173] 'compare'  (duration: 136.409585ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-13T00:29:49.954013Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.206378ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-12-13T00:29:49.954104Z","caller":"traceutil/trace.go:171","msg":"trace[1078843614] range","detail":"{range_begin:/registry/replicasets/; range_end:/registry/replicasets0; response_count:0; response_revision:1556; }","duration":"108.316924ms","start":"2023-12-13T00:29:49.84577Z","end":"2023-12-13T00:29:49.954087Z","steps":["trace[1078843614] 'agreement among raft nodes before linearized reading'  (duration: 108.074307ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-13T00:29:49.954108Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.003237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2023-12-13T00:29:49.955364Z","caller":"traceutil/trace.go:171","msg":"trace[1730430882] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1556; }","duration":"228.256725ms","start":"2023-12-13T00:29:49.727092Z","end":"2023-12-13T00:29:49.955348Z","steps":["trace[1730430882] 'agreement among raft nodes before linearized reading'  (duration: 226.954719ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-13T00:30:16.384178Z","caller":"traceutil/trace.go:171","msg":"trace[260134891] transaction","detail":"{read_only:false; response_revision:1579; number_of_response:1; }","duration":"186.662497ms","start":"2023-12-13T00:30:16.197485Z","end":"2023-12-13T00:30:16.384148Z","steps":["trace[260134891] 'process raft request'  (duration: 186.510077ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-13T00:30:16.717225Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.791814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-13T00:30:16.71746Z","caller":"traceutil/trace.go:171","msg":"trace[753181931] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1579; }","duration":"107.049978ms","start":"2023-12-13T00:30:16.610381Z","end":"2023-12-13T00:30:16.717431Z","steps":["trace[753181931] 'range keys from in-memory index tree'  (duration: 106.707034ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-13T00:30:39.064177Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15328437152858821555,"retry-timeout":"500ms"}
	{"level":"info","ts":"2023-12-13T00:30:39.18024Z","caller":"traceutil/trace.go:171","msg":"trace[1451921668] linearizableReadLoop","detail":"{readStateIndex:1886; appliedIndex:1885; }","duration":"617.029657ms","start":"2023-12-13T00:30:38.563197Z","end":"2023-12-13T00:30:39.180227Z","steps":["trace[1451921668] 'read index received'  (duration: 616.86522ms)","trace[1451921668] 'applied index is now lower than readState.Index'  (duration: 163.54µs)"],"step_count":2}
	{"level":"info","ts":"2023-12-13T00:30:39.180349Z","caller":"traceutil/trace.go:171","msg":"trace[184263898] transaction","detail":"{read_only:false; response_revision:1596; number_of_response:1; }","duration":"666.218746ms","start":"2023-12-13T00:30:38.514121Z","end":"2023-12-13T00:30:39.18034Z","steps":["trace[184263898] 'process raft request'  (duration: 665.983864ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-13T00:30:39.180436Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-13T00:30:38.514104Z","time spent":"666.264876ms","remote":"127.0.0.1:57350","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1595 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2023-12-13T00:30:39.180683Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"569.666308ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-13T00:30:39.18077Z","caller":"traceutil/trace.go:171","msg":"trace[1880537901] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1596; }","duration":"569.832587ms","start":"2023-12-13T00:30:38.610921Z","end":"2023-12-13T00:30:39.180754Z","steps":["trace[1880537901] 'agreement among raft nodes before linearized reading'  (duration: 569.637451ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-13T00:30:39.180836Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-13T00:30:38.610907Z","time spent":"569.914494ms","remote":"127.0.0.1:57354","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":29,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"warn","ts":"2023-12-13T00:30:39.180898Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"617.709925ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-12-13T00:30:39.180965Z","caller":"traceutil/trace.go:171","msg":"trace[212071149] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:0; response_revision:1596; }","duration":"617.781095ms","start":"2023-12-13T00:30:38.563173Z","end":"2023-12-13T00:30:39.180954Z","steps":["trace[212071149] 'agreement among raft nodes before linearized reading'  (duration: 617.619584ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-13T00:30:39.181016Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-13T00:30:38.56316Z","time spent":"617.837286ms","remote":"127.0.0.1:57356","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":3,"response size":31,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true "}
	{"level":"warn","ts":"2023-12-13T00:30:40.066324Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.43397ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15328437152858821563 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.144\" mod_revision:1589 > success:<request_put:<key:\"/registry/masterleases/192.168.72.144\" value_size:67 lease:6105065116004045753 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.144\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-13T00:30:40.066529Z","caller":"traceutil/trace.go:171","msg":"trace[962217700] linearizableReadLoop","detail":"{readStateIndex:1888; appliedIndex:1887; }","duration":"193.462803ms","start":"2023-12-13T00:30:39.873042Z","end":"2023-12-13T00:30:40.066505Z","steps":["trace[962217700] 'read index received'  (duration: 60.71429ms)","trace[962217700] 'applied index is now lower than readState.Index'  (duration: 132.746481ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-13T00:30:40.066726Z","caller":"traceutil/trace.go:171","msg":"trace[1371807226] transaction","detail":"{read_only:false; response_revision:1597; number_of_response:1; }","duration":"259.534768ms","start":"2023-12-13T00:30:39.807172Z","end":"2023-12-13T00:30:40.066707Z","steps":["trace[1371807226] 'process raft request'  (duration: 126.685305ms)","trace[1371807226] 'compare'  (duration: 132.288973ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-13T00:30:40.066906Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.870783ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-13T00:30:40.066967Z","caller":"traceutil/trace.go:171","msg":"trace[357502488] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1597; }","duration":"193.940206ms","start":"2023-12-13T00:30:39.873017Z","end":"2023-12-13T00:30:40.066957Z","steps":["trace[357502488] 'agreement among raft nodes before linearized reading'  (duration: 193.749832ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  00:30:54 up 21 min,  0 users,  load average: 0.30, 0.37, 0.24
	Linux default-k8s-diff-port-743278 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1] <==
	* E1213 00:29:48.287521       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:29:48.288441       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1213 00:29:49.288810       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:29:49.289017       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1213 00:29:49.289088       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 00:29:49.288926       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:29:49.289218       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:29:49.290458       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1213 00:30:39.182213       1 trace.go:236] Trace[68218640]: "List" accept:application/json, */*,audit-id:be366547-b5a4-4303-aee6-3dc73549aa90,client:192.168.72.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kubernetes-dashboard/pods,user-agent:e2e-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:LIST (13-Dec-2023 00:30:38.610) (total time: 571ms):
	Trace[68218640]: ["List(recursive=true) etcd3" audit-id:be366547-b5a4-4303-aee6-3dc73549aa90,key:/pods/kubernetes-dashboard,resourceVersion:,resourceVersionMatch:,limit:0,continue: 571ms (00:30:38.610)]
	Trace[68218640]: [571.916027ms] [571.916027ms] END
	I1213 00:30:39.182551       1 trace.go:236] Trace[956009463]: "Update" accept:application/json, */*,audit-id:60cb8f3b-baaf-4c8e-8b74-27ef0997f7f7,client:192.168.72.144,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (13-Dec-2023 00:30:38.512) (total time: 669ms):
	Trace[956009463]: ["GuaranteedUpdate etcd3" audit-id:60cb8f3b-baaf-4c8e-8b74-27ef0997f7f7,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 668ms (00:30:38.512)
	Trace[956009463]:  ---"Txn call completed" 668ms (00:30:39.181)]
	Trace[956009463]: [669.225146ms] [669.225146ms] END
	I1213 00:30:48.132225       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1213 00:30:49.290116       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:30:49.290285       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1213 00:30:49.290384       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 00:30:49.291245       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:30:49.291325       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:30:49.291404       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673] <==
	* I1213 00:25:02.006744       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:25:31.466322       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:25:32.015590       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:26:01.478481       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:26:02.024274       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1213 00:26:10.135103       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="266.626µs"
	I1213 00:26:23.135939       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="132.208µs"
	E1213 00:26:31.483678       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:26:32.033129       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:27:01.489507       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:27:02.041661       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:27:31.495698       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:27:32.049827       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:28:01.503716       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:28:02.060457       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:28:31.510386       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:28:32.068338       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:29:01.519762       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:29:02.078474       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:29:31.526436       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:29:32.094835       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:30:01.531661       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:30:02.107989       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:30:31.538258       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:30:32.118529       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41] <==
	* I1213 00:09:50.161458       1 server_others.go:69] "Using iptables proxy"
	I1213 00:09:50.182264       1 node.go:141] Successfully retrieved node IP: 192.168.72.144
	I1213 00:09:50.423929       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1213 00:09:50.424018       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 00:09:50.435326       1 server_others.go:152] "Using iptables Proxier"
	I1213 00:09:50.435446       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1213 00:09:50.435691       1 server.go:846] "Version info" version="v1.28.4"
	I1213 00:09:50.443059       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 00:09:50.447834       1 config.go:188] "Starting service config controller"
	I1213 00:09:50.447984       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1213 00:09:50.448042       1 config.go:97] "Starting endpoint slice config controller"
	I1213 00:09:50.448081       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1213 00:09:50.448785       1 config.go:315] "Starting node config controller"
	I1213 00:09:50.448822       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1213 00:09:50.549786       1 shared_informer.go:318] Caches are synced for node config
	I1213 00:09:50.549839       1 shared_informer.go:318] Caches are synced for service config
	I1213 00:09:50.549936       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee] <==
	* I1213 00:09:45.149971       1 serving.go:348] Generated self-signed cert in-memory
	W1213 00:09:48.194725       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 00:09:48.194898       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 00:09:48.194914       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 00:09:48.195013       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 00:09:48.268450       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1213 00:09:48.268548       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 00:09:48.276449       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 00:09:48.276498       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1213 00:09:48.281076       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1213 00:09:48.281175       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1213 00:09:48.377578       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-12-13 00:09:14 UTC, ends at Wed 2023-12-13 00:30:54 UTC. --
	Dec 13 00:28:26 default-k8s-diff-port-743278 kubelet[914]: E1213 00:28:26.117934     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:28:39 default-k8s-diff-port-743278 kubelet[914]: E1213 00:28:39.118419     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:28:41 default-k8s-diff-port-743278 kubelet[914]: E1213 00:28:41.134519     914 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 13 00:28:41 default-k8s-diff-port-743278 kubelet[914]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 00:28:41 default-k8s-diff-port-743278 kubelet[914]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 00:28:41 default-k8s-diff-port-743278 kubelet[914]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 00:28:52 default-k8s-diff-port-743278 kubelet[914]: E1213 00:28:52.119501     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:29:04 default-k8s-diff-port-743278 kubelet[914]: E1213 00:29:04.118325     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:29:17 default-k8s-diff-port-743278 kubelet[914]: E1213 00:29:17.118070     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:29:32 default-k8s-diff-port-743278 kubelet[914]: E1213 00:29:32.117315     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:29:41 default-k8s-diff-port-743278 kubelet[914]: E1213 00:29:41.007471     914 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Dec 13 00:29:41 default-k8s-diff-port-743278 kubelet[914]: E1213 00:29:41.136259     914 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 13 00:29:41 default-k8s-diff-port-743278 kubelet[914]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 00:29:41 default-k8s-diff-port-743278 kubelet[914]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 00:29:41 default-k8s-diff-port-743278 kubelet[914]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 00:29:45 default-k8s-diff-port-743278 kubelet[914]: E1213 00:29:45.118385     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:29:56 default-k8s-diff-port-743278 kubelet[914]: E1213 00:29:56.118836     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:30:09 default-k8s-diff-port-743278 kubelet[914]: E1213 00:30:09.119519     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:30:22 default-k8s-diff-port-743278 kubelet[914]: E1213 00:30:22.118825     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:30:33 default-k8s-diff-port-743278 kubelet[914]: E1213 00:30:33.120822     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	Dec 13 00:30:41 default-k8s-diff-port-743278 kubelet[914]: E1213 00:30:41.134930     914 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 13 00:30:41 default-k8s-diff-port-743278 kubelet[914]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 00:30:41 default-k8s-diff-port-743278 kubelet[914]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 00:30:41 default-k8s-diff-port-743278 kubelet[914]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 00:30:45 default-k8s-diff-port-743278 kubelet[914]: E1213 00:30:45.119873     914 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6q9jg" podUID="b1849258-4fd1-43a5-b67b-02d8e44acd8b"
	
	* 
	* ==> storage-provisioner [705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a] <==
	* I1213 00:09:50.487844       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 00:10:20.490481       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974] <==
	* I1213 00:10:21.485573       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 00:10:21.499559       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 00:10:21.499829       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 00:10:38.907235       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 00:10:38.907582       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7b29af0b-eb3e-4d78-a9af-aaad07e4d87b", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-743278_2c120ac1-e14b-4d75-9fd9-1814f0326f46 became leader
	I1213 00:10:38.907737       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-743278_2c120ac1-e14b-4d75-9fd9-1814f0326f46!
	I1213 00:10:39.007874       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-743278_2c120ac1-e14b-4d75-9fd9-1814f0326f46!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-743278 -n default-k8s-diff-port-743278
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-743278 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-6q9jg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-743278 describe pod metrics-server-57f55c9bc5-6q9jg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-743278 describe pod metrics-server-57f55c9bc5-6q9jg: exit status 1 (67.670223ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-6q9jg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-743278 describe pod metrics-server-57f55c9bc5-6q9jg: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (457.46s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (312.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1213 00:24:08.502293  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
E1213 00:24:27.616509  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1213 00:25:11.804927  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-143586 -n no-preload-143586
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-13 00:28:53.650863309 +0000 UTC m=+5657.207000853
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-143586 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-143586 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.373µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-143586 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-143586 -n no-preload-143586
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-143586 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-143586 logs -n 25: (1.320968928s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 13 Dec 23 00:01 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p pause-042245                                        | pause-042245                 | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 12 Dec 23 23:58 UTC |
	| start   | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 13 Dec 23 00:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-884273                              | stopped-upgrade-884273       | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-884273                              | stopped-upgrade-884273       | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC | 13 Dec 23 00:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-343019 | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC | 13 Dec 23 00:00 UTC |
	|         | disable-driver-mounts-343019                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC | 13 Dec 23 00:01 UTC |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-508612        | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-335807            | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-143586             | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-743278  | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:02 UTC | 13 Dec 23 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:02 UTC |                     |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-508612             | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:03 UTC | 13 Dec 23 00:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-335807                 | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-143586                  | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-743278       | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:28 UTC | 13 Dec 23 00:28 UTC |
	| start   | -p newest-cni-628189 --memory=2200 --alsologtostderr   | newest-cni-628189            | jenkins | v1.32.0 | 13 Dec 23 00:28 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
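	The final entry above is the profile whose startup is traced in the "Last Start" section below. Reassembling its wrapped argument rows into a single invocation (a reconstruction from the table, not an additional command that was run; "minikube" here stands in for whatever minikube build the test harness invokes) gives roughly:
	
	  minikube start -p newest-cni-628189 --memory=2200 --alsologtostderr \
	    --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
	    --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	    --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.29.0-rc.2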
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/13 00:28:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 00:28:52.485091  182846 out.go:296] Setting OutFile to fd 1 ...
	I1213 00:28:52.485365  182846 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:28:52.485375  182846 out.go:309] Setting ErrFile to fd 2...
	I1213 00:28:52.485381  182846 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:28:52.485602  182846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1213 00:28:52.486390  182846 out.go:303] Setting JSON to false
	I1213 00:28:52.487537  182846 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11481,"bootTime":1702415852,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 00:28:52.487594  182846 start.go:138] virtualization: kvm guest
	I1213 00:28:52.490026  182846 out.go:177] * [newest-cni-628189] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 00:28:52.491941  182846 out.go:177]   - MINIKUBE_LOCATION=17777
	I1213 00:28:52.491959  182846 notify.go:220] Checking for updates...
	I1213 00:28:52.493688  182846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 00:28:52.495202  182846 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:28:52.496641  182846 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1213 00:28:52.498205  182846 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 00:28:52.499683  182846 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 00:28:52.501840  182846 config.go:182] Loaded profile config "default-k8s-diff-port-743278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:28:52.501980  182846 config.go:182] Loaded profile config "embed-certs-335807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:28:52.502126  182846 config.go:182] Loaded profile config "no-preload-143586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1213 00:28:52.502252  182846 driver.go:392] Setting default libvirt URI to qemu:///system
	I1213 00:28:52.540935  182846 out.go:177] * Using the kvm2 driver based on user configuration
	I1213 00:28:52.542520  182846 start.go:298] selected driver: kvm2
	I1213 00:28:52.542535  182846 start.go:902] validating driver "kvm2" against <nil>
	I1213 00:28:52.542547  182846 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 00:28:52.543279  182846 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:28:52.543348  182846 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17777-136241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1213 00:28:52.559867  182846 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1213 00:28:52.559916  182846 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1213 00:28:52.559942  182846 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 00:28:52.560138  182846 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 00:28:52.560197  182846 cni.go:84] Creating CNI manager for ""
	I1213 00:28:52.560208  182846 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:28:52.560223  182846 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 00:28:52.560230  182846 start_flags.go:323] config:
	{Name:newest-cni-628189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-628189 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:28:52.560365  182846 iso.go:125] acquiring lock: {Name:mkaf0c5717f5bf6253bd7ebf86eb20a82d195bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:28:52.562265  182846 out.go:177] * Starting control plane node newest-cni-628189 in cluster newest-cni-628189
	I1213 00:28:52.563623  182846 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1213 00:28:52.563660  182846 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1213 00:28:52.563667  182846 cache.go:56] Caching tarball of preloaded images
	I1213 00:28:52.563786  182846 preload.go:174] Found /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 00:28:52.563802  182846 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I1213 00:28:52.563925  182846 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/newest-cni-628189/config.json ...
	I1213 00:28:52.563950  182846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/newest-cni-628189/config.json: {Name:mkd3f792f7d39641521483e9e698f1a50159a6b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:28:52.564110  182846 start.go:365] acquiring machines lock for newest-cni-628189: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 00:28:52.564144  182846 start.go:369] acquired machines lock for "newest-cni-628189" in 19.047µs
	I1213 00:28:52.564158  182846 start.go:93] Provisioning new machine with config: &{Name:newest-cni-628189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-628189 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:28:52.564215  182846 start.go:125] createHost starting for "" (driver="kvm2")
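	The component map logged at 00:28:52.560138 appears to be the parsed form of this profile's --wait flag from the table above; as a sketch of that correspondence, using only values that appear in this log:
	
	  --wait=apiserver,system_pods,default_sa
	      waited on:   apiserver:true  system_pods:true  default_sa:true
	      not waited:  apps_running:false  extra:false  kubelet:false  node_ready:false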
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-12-13 00:08:49 UTC, ends at Wed 2023-12-13 00:28:54 UTC. --
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.394839322Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427334394827643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=5a84884f-cb09-4791-a425-e13988747099 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.395380576Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=77357f15-75a9-44ef-956d-4e43164f70f3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.395426377Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=77357f15-75a9-44ef-956d-4e43164f70f3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.395690764Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20d184eec9f33c68b1c58d5ac3d3b42daefb6d790a40ececc0a0a6d73d8bc2e9,PodSandboxId:90f63c23ff82a176fbd5ae9ff4cf9646ae59e08df9e2bfcef103afdf0886a9a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702426477398489232,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400b27cc-1713-4201-8097-3e3fd8004690,},Annotations:map[string]string{io.kubernetes.container.hash: 6f17d3a3,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceffe7d16ebcec4c8d253cf076d69fc6a43dfb849cdfbd6706173b65e02d0841,PodSandboxId:a0adcf9f70dca2dc0269cf7117fda0be3f5486f8c0bb61a4a815df1306e8fa74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702426477443508907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8fb8b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca4e237-6a35-4e8f-9731-3f9655eba995,},Annotations:map[string]string{io.kubernetes.container.hash: c36be869,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3334e05facd9a7a9d9ec1b86f6f09eda9fa692c41aab91e3ca1d18a0c2971500,PodSandboxId:c20d2b9dfbb4d5ddc60f715204f68d40502dad7208d7d42ea8f725167d9d0f88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702426475844747005,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsdtr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 23a261a4-17d1-4657-8052-02b71055c850,},Annotations:map[string]string{io.kubernetes.container.hash: 6fc22b81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc806049c60f6cc717021f3d310359e630e4bf642716e5c060b2058dc95dbea,PodSandboxId:59eda18ed3fd6ec48948543cafcd1ead163bcd4a2496c527f9422a1afe915f0b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702426453649500376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
57ed38c59bb1eeeff2f421f353e43eb4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81c70296c970b0afef07e25c03aa1cffa14aaaafbb70386e1226b02bc905fcfa,PodSandboxId:3228cefd0f55d3b94196b31ba7f566c849fa4a8dddd142a7563fc340ac25d103,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702426453596147554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 093c0e4cf8564d4cddfc63bf4c87f834,},Annotations:map
[string]string{io.kubernetes.container.hash: 73904251,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00fdf95a89e82f9f27a55d5d118ca6097b4297a514064f52e1fff695f64c0dc6,PodSandboxId:672d6c90cedad4e30e5a1cbe87835b947fce4c90edd9d626368f25b0b6dcf1d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702426453267570927,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8521eaa16e6be6fa4a18a323
13c16e0,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e7ea689cef4571673b59e61ef365d3b27d8c9e7a8b4cd6c2d27dc6869a3f35,PodSandboxId:b8c5203ff5e0bb437279f34785e33ca04f300e589eeab01c2f5a1c4ca07bcfef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702426453138159365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6f20bf7e0f43f404e4c0b920dda2219,},A
nnotations:map[string]string{io.kubernetes.container.hash: a4127c24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=77357f15-75a9-44ef-956d-4e43164f70f3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.450382966Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b81e9f6e-4cf4-4e2c-9e06-45408baea5ec name=/runtime.v1.RuntimeService/Version
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.450464381Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b81e9f6e-4cf4-4e2c-9e06-45408baea5ec name=/runtime.v1.RuntimeService/Version
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.452492736Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a452fb18-539e-4ca3-aba5-f2e4fd16a90e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.452956346Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427334452935891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=a452fb18-539e-4ca3-aba5-f2e4fd16a90e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.454171990Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dd765a14-789d-4d9d-9134-266d2f749f28 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.454247444Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dd765a14-789d-4d9d-9134-266d2f749f28 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.454533901Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20d184eec9f33c68b1c58d5ac3d3b42daefb6d790a40ececc0a0a6d73d8bc2e9,PodSandboxId:90f63c23ff82a176fbd5ae9ff4cf9646ae59e08df9e2bfcef103afdf0886a9a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702426477398489232,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400b27cc-1713-4201-8097-3e3fd8004690,},Annotations:map[string]string{io.kubernetes.container.hash: 6f17d3a3,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceffe7d16ebcec4c8d253cf076d69fc6a43dfb849cdfbd6706173b65e02d0841,PodSandboxId:a0adcf9f70dca2dc0269cf7117fda0be3f5486f8c0bb61a4a815df1306e8fa74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702426477443508907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8fb8b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca4e237-6a35-4e8f-9731-3f9655eba995,},Annotations:map[string]string{io.kubernetes.container.hash: c36be869,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3334e05facd9a7a9d9ec1b86f6f09eda9fa692c41aab91e3ca1d18a0c2971500,PodSandboxId:c20d2b9dfbb4d5ddc60f715204f68d40502dad7208d7d42ea8f725167d9d0f88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702426475844747005,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsdtr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 23a261a4-17d1-4657-8052-02b71055c850,},Annotations:map[string]string{io.kubernetes.container.hash: 6fc22b81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc806049c60f6cc717021f3d310359e630e4bf642716e5c060b2058dc95dbea,PodSandboxId:59eda18ed3fd6ec48948543cafcd1ead163bcd4a2496c527f9422a1afe915f0b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702426453649500376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
57ed38c59bb1eeeff2f421f353e43eb4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81c70296c970b0afef07e25c03aa1cffa14aaaafbb70386e1226b02bc905fcfa,PodSandboxId:3228cefd0f55d3b94196b31ba7f566c849fa4a8dddd142a7563fc340ac25d103,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702426453596147554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 093c0e4cf8564d4cddfc63bf4c87f834,},Annotations:map
[string]string{io.kubernetes.container.hash: 73904251,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00fdf95a89e82f9f27a55d5d118ca6097b4297a514064f52e1fff695f64c0dc6,PodSandboxId:672d6c90cedad4e30e5a1cbe87835b947fce4c90edd9d626368f25b0b6dcf1d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702426453267570927,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8521eaa16e6be6fa4a18a323
13c16e0,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e7ea689cef4571673b59e61ef365d3b27d8c9e7a8b4cd6c2d27dc6869a3f35,PodSandboxId:b8c5203ff5e0bb437279f34785e33ca04f300e589eeab01c2f5a1c4ca07bcfef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702426453138159365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6f20bf7e0f43f404e4c0b920dda2219,},A
nnotations:map[string]string{io.kubernetes.container.hash: a4127c24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dd765a14-789d-4d9d-9134-266d2f749f28 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.505315495Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4cdcb2fe-6db8-42ad-b27e-20f06bc6ec57 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.505451588Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4cdcb2fe-6db8-42ad-b27e-20f06bc6ec57 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.511740678Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=118d6466-ceda-4fa9-a3fa-c3d7576c2b13 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.512299901Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427334512279337,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=118d6466-ceda-4fa9-a3fa-c3d7576c2b13 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.516490709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e278616d-2c04-4d0c-a9cb-ee84f403caca name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.516746935Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e278616d-2c04-4d0c-a9cb-ee84f403caca name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.517069387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20d184eec9f33c68b1c58d5ac3d3b42daefb6d790a40ececc0a0a6d73d8bc2e9,PodSandboxId:90f63c23ff82a176fbd5ae9ff4cf9646ae59e08df9e2bfcef103afdf0886a9a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702426477398489232,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400b27cc-1713-4201-8097-3e3fd8004690,},Annotations:map[string]string{io.kubernetes.container.hash: 6f17d3a3,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceffe7d16ebcec4c8d253cf076d69fc6a43dfb849cdfbd6706173b65e02d0841,PodSandboxId:a0adcf9f70dca2dc0269cf7117fda0be3f5486f8c0bb61a4a815df1306e8fa74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702426477443508907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8fb8b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca4e237-6a35-4e8f-9731-3f9655eba995,},Annotations:map[string]string{io.kubernetes.container.hash: c36be869,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3334e05facd9a7a9d9ec1b86f6f09eda9fa692c41aab91e3ca1d18a0c2971500,PodSandboxId:c20d2b9dfbb4d5ddc60f715204f68d40502dad7208d7d42ea8f725167d9d0f88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702426475844747005,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsdtr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 23a261a4-17d1-4657-8052-02b71055c850,},Annotations:map[string]string{io.kubernetes.container.hash: 6fc22b81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc806049c60f6cc717021f3d310359e630e4bf642716e5c060b2058dc95dbea,PodSandboxId:59eda18ed3fd6ec48948543cafcd1ead163bcd4a2496c527f9422a1afe915f0b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702426453649500376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
57ed38c59bb1eeeff2f421f353e43eb4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81c70296c970b0afef07e25c03aa1cffa14aaaafbb70386e1226b02bc905fcfa,PodSandboxId:3228cefd0f55d3b94196b31ba7f566c849fa4a8dddd142a7563fc340ac25d103,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702426453596147554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 093c0e4cf8564d4cddfc63bf4c87f834,},Annotations:map
[string]string{io.kubernetes.container.hash: 73904251,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00fdf95a89e82f9f27a55d5d118ca6097b4297a514064f52e1fff695f64c0dc6,PodSandboxId:672d6c90cedad4e30e5a1cbe87835b947fce4c90edd9d626368f25b0b6dcf1d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702426453267570927,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8521eaa16e6be6fa4a18a323
13c16e0,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e7ea689cef4571673b59e61ef365d3b27d8c9e7a8b4cd6c2d27dc6869a3f35,PodSandboxId:b8c5203ff5e0bb437279f34785e33ca04f300e589eeab01c2f5a1c4ca07bcfef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702426453138159365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6f20bf7e0f43f404e4c0b920dda2219,},A
nnotations:map[string]string{io.kubernetes.container.hash: a4127c24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e278616d-2c04-4d0c-a9cb-ee84f403caca name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.558443784Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=919e86c1-e38e-419b-ab71-5ad358945959 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.558503230Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=919e86c1-e38e-419b-ab71-5ad358945959 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.559680115Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e001ec57-5239-4b3c-877e-8a8e9d935035 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.559979046Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427334559968952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=e001ec57-5239-4b3c-877e-8a8e9d935035 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.561077161Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=28921e90-6817-4e54-a437-f020e91e8b74 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.561159093Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=28921e90-6817-4e54-a437-f020e91e8b74 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:54 no-preload-143586 crio[734]: time="2023-12-13 00:28:54.561316051Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20d184eec9f33c68b1c58d5ac3d3b42daefb6d790a40ececc0a0a6d73d8bc2e9,PodSandboxId:90f63c23ff82a176fbd5ae9ff4cf9646ae59e08df9e2bfcef103afdf0886a9a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702426477398489232,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 400b27cc-1713-4201-8097-3e3fd8004690,},Annotations:map[string]string{io.kubernetes.container.hash: 6f17d3a3,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceffe7d16ebcec4c8d253cf076d69fc6a43dfb849cdfbd6706173b65e02d0841,PodSandboxId:a0adcf9f70dca2dc0269cf7117fda0be3f5486f8c0bb61a4a815df1306e8fa74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702426477443508907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8fb8b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca4e237-6a35-4e8f-9731-3f9655eba995,},Annotations:map[string]string{io.kubernetes.container.hash: c36be869,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3334e05facd9a7a9d9ec1b86f6f09eda9fa692c41aab91e3ca1d18a0c2971500,PodSandboxId:c20d2b9dfbb4d5ddc60f715204f68d40502dad7208d7d42ea8f725167d9d0f88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702426475844747005,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsdtr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 23a261a4-17d1-4657-8052-02b71055c850,},Annotations:map[string]string{io.kubernetes.container.hash: 6fc22b81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adc806049c60f6cc717021f3d310359e630e4bf642716e5c060b2058dc95dbea,PodSandboxId:59eda18ed3fd6ec48948543cafcd1ead163bcd4a2496c527f9422a1afe915f0b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702426453649500376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
57ed38c59bb1eeeff2f421f353e43eb4,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81c70296c970b0afef07e25c03aa1cffa14aaaafbb70386e1226b02bc905fcfa,PodSandboxId:3228cefd0f55d3b94196b31ba7f566c849fa4a8dddd142a7563fc340ac25d103,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702426453596147554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 093c0e4cf8564d4cddfc63bf4c87f834,},Annotations:map
[string]string{io.kubernetes.container.hash: 73904251,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00fdf95a89e82f9f27a55d5d118ca6097b4297a514064f52e1fff695f64c0dc6,PodSandboxId:672d6c90cedad4e30e5a1cbe87835b947fce4c90edd9d626368f25b0b6dcf1d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702426453267570927,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8521eaa16e6be6fa4a18a323
13c16e0,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e7ea689cef4571673b59e61ef365d3b27d8c9e7a8b4cd6c2d27dc6869a3f35,PodSandboxId:b8c5203ff5e0bb437279f34785e33ca04f300e589eeab01c2f5a1c4ca07bcfef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702426453138159365,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-143586,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6f20bf7e0f43f404e4c0b920dda2219,},A
nnotations:map[string]string{io.kubernetes.container.hash: a4127c24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=28921e90-6817-4e54-a437-f020e91e8b74 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ceffe7d16ebce       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   a0adcf9f70dca       coredns-76f75df574-8fb8b
	20d184eec9f33       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   90f63c23ff82a       storage-provisioner
	3334e05facd9a       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   14 minutes ago      Running             kube-proxy                0                   c20d2b9dfbb4d       kube-proxy-xsdtr
	adc806049c60f       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   14 minutes ago      Running             kube-scheduler            2                   59eda18ed3fd6       kube-scheduler-no-preload-143586
	81c70296c970b       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   14 minutes ago      Running             etcd                      2                   3228cefd0f55d       etcd-no-preload-143586
	00fdf95a89e82       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   14 minutes ago      Running             kube-controller-manager   2                   672d6c90cedad       kube-controller-manager-no-preload-143586
	55e7ea689cef4       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   14 minutes ago      Running             kube-apiserver            2                   b8c5203ff5e0b       kube-apiserver-no-preload-143586
	
	* 
	* ==> coredns [ceffe7d16ebcec4c8d253cf076d69fc6a43dfb849cdfbd6706173b65e02d0841] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60715 - 30942 "HINFO IN 339424797621679506.3135540895672571054. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014554846s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-143586
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-143586
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446
	                    minikube.k8s.io/name=no-preload-143586
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_13T00_14_20_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Dec 2023 00:14:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-143586
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 13 Dec 2023 00:28:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 13 Dec 2023 00:24:54 +0000   Wed, 13 Dec 2023 00:14:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 13 Dec 2023 00:24:54 +0000   Wed, 13 Dec 2023 00:14:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 13 Dec 2023 00:24:54 +0000   Wed, 13 Dec 2023 00:14:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 13 Dec 2023 00:24:54 +0000   Wed, 13 Dec 2023 00:14:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.181
	  Hostname:    no-preload-143586
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 bb85d1675f224cc781a112e54bad3e44
	  System UUID:                bb85d167-5f22-4cc7-81a1-12e54bad3e44
	  Boot ID:                    9f621f45-b0f5-4147-b31b-e0050ecf5f7e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-8fb8b                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-143586                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-143586             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-143586    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-xsdtr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-143586             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-57f55c9bc5-q7v45              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-143586 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-143586 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-143586 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             14m   kubelet          Node no-preload-143586 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14m   kubelet          Node no-preload-143586 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-143586 event: Registered Node no-preload-143586 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec13 00:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069424] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.487979] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.521211] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150081] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.459502] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec13 00:09] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.129537] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.148990] systemd-fstab-generator[685]: Ignoring "noauto" for root device
	[  +0.104166] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[  +0.223552] systemd-fstab-generator[720]: Ignoring "noauto" for root device
	[ +29.768468] systemd-fstab-generator[1347]: Ignoring "noauto" for root device
	[ +14.547022] hrtimer: interrupt took 5637904 ns
	[  +5.638925] kauditd_printk_skb: 29 callbacks suppressed
	[Dec13 00:14] systemd-fstab-generator[3994]: Ignoring "noauto" for root device
	[  +9.302037] systemd-fstab-generator[4324]: Ignoring "noauto" for root device
	[ +15.695625] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [81c70296c970b0afef07e25c03aa1cffa14aaaafbb70386e1226b02bc905fcfa] <==
	* {"level":"info","ts":"2023-12-13T00:14:15.609456Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.181:2380"}
	{"level":"info","ts":"2023-12-13T00:14:15.60896Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-13T00:14:15.615077Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"86051bbfebcbb1c3","initial-advertise-peer-urls":["https://192.168.50.181:2380"],"listen-peer-urls":["https://192.168.50.181:2380"],"advertise-client-urls":["https://192.168.50.181:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.181:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-13T00:14:15.615311Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-13T00:14:15.946487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86051bbfebcbb1c3 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-13T00:14:15.94655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86051bbfebcbb1c3 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-13T00:14:15.946576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86051bbfebcbb1c3 received MsgPreVoteResp from 86051bbfebcbb1c3 at term 1"}
	{"level":"info","ts":"2023-12-13T00:14:15.946587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86051bbfebcbb1c3 became candidate at term 2"}
	{"level":"info","ts":"2023-12-13T00:14:15.946593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86051bbfebcbb1c3 received MsgVoteResp from 86051bbfebcbb1c3 at term 2"}
	{"level":"info","ts":"2023-12-13T00:14:15.946604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86051bbfebcbb1c3 became leader at term 2"}
	{"level":"info","ts":"2023-12-13T00:14:15.946612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 86051bbfebcbb1c3 elected leader 86051bbfebcbb1c3 at term 2"}
	{"level":"info","ts":"2023-12-13T00:14:15.948126Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"86051bbfebcbb1c3","local-member-attributes":"{Name:no-preload-143586 ClientURLs:[https://192.168.50.181:2379]}","request-path":"/0/members/86051bbfebcbb1c3/attributes","cluster-id":"8eb1120df9352a4b","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-13T00:14:15.948192Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-13T00:14:15.948512Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-13T00:14:15.948707Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-13T00:14:15.950842Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.181:2379"}
	{"level":"info","ts":"2023-12-13T00:14:15.951359Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8eb1120df9352a4b","local-member-id":"86051bbfebcbb1c3","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-13T00:14:15.955197Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-13T00:14:15.955327Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-13T00:14:15.955396Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-13T00:14:15.951433Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-13T00:14:15.955441Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-13T00:24:16.001545Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":682}
	{"level":"info","ts":"2023-12-13T00:24:16.003904Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":682,"took":"1.720943ms","hash":3723349647}
	{"level":"info","ts":"2023-12-13T00:24:16.004052Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3723349647,"revision":682,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  00:28:54 up 20 min,  0 users,  load average: 0.38, 0.46, 0.37
	Linux no-preload-143586 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [55e7ea689cef4571673b59e61ef365d3b27d8c9e7a8b4cd6c2d27dc6869a3f35] <==
	* I1213 00:22:18.573633       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 00:24:17.575770       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:24:17.576193       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1213 00:24:18.576660       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:24:18.576767       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1213 00:24:18.576813       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 00:24:18.576881       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:24:18.576943       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:24:18.578142       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 00:25:18.577912       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:25:18.578063       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1213 00:25:18.578074       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 00:25:18.579131       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:25:18.579263       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:25:18.579295       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 00:27:18.578779       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:27:18.579269       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1213 00:27:18.579308       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 00:27:18.579505       1 handler_proxy.go:93] no RequestInfo found in the context
	E1213 00:27:18.579574       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:27:18.581362       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [00fdf95a89e82f9f27a55d5d118ca6097b4297a514064f52e1fff695f64c0dc6] <==
	* I1213 00:23:04.729109       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:23:34.257283       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:23:34.742757       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:24:04.263084       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:24:04.752177       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:24:34.269183       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:24:34.760682       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:25:04.275484       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:25:04.769251       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:25:34.281663       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:25:34.778739       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1213 00:25:47.996424       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="420.038µs"
	I1213 00:26:01.991338       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="149.419µs"
	E1213 00:26:04.287383       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:26:04.786852       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:26:34.293057       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:26:34.795731       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:27:04.299384       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:27:04.805459       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:27:34.305438       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:27:34.815117       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:28:04.310792       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:28:04.825394       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1213 00:28:34.317933       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1213 00:28:34.835338       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [3334e05facd9a7a9d9ec1b86f6f09eda9fa692c41aab91e3ca1d18a0c2971500] <==
	* I1213 00:14:36.334979       1 server_others.go:72] "Using iptables proxy"
	I1213 00:14:36.358938       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.181"]
	I1213 00:14:37.133745       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I1213 00:14:37.133799       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 00:14:37.133814       1 server_others.go:168] "Using iptables Proxier"
	I1213 00:14:37.182252       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1213 00:14:37.182623       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I1213 00:14:37.182710       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 00:14:37.187986       1 config.go:188] "Starting service config controller"
	I1213 00:14:37.189570       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1213 00:14:37.189982       1 config.go:97] "Starting endpoint slice config controller"
	I1213 00:14:37.194743       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1213 00:14:37.190947       1 config.go:315] "Starting node config controller"
	I1213 00:14:37.197376       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1213 00:14:37.197679       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1213 00:14:37.291344       1 shared_informer.go:318] Caches are synced for service config
	I1213 00:14:37.297509       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [adc806049c60f6cc717021f3d310359e630e4bf642716e5c060b2058dc95dbea] <==
	* W1213 00:14:17.578194       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1213 00:14:17.578238       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1213 00:14:18.388401       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1213 00:14:18.388469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1213 00:14:18.445484       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1213 00:14:18.445537       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1213 00:14:18.588535       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1213 00:14:18.588595       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1213 00:14:18.658206       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1213 00:14:18.658297       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1213 00:14:18.763798       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1213 00:14:18.763929       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1213 00:14:18.774178       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1213 00:14:18.774265       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 00:14:18.775325       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1213 00:14:18.775463       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1213 00:14:18.790757       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1213 00:14:18.790831       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1213 00:14:18.806847       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1213 00:14:18.806929       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1213 00:14:18.834704       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1213 00:14:18.834782       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1213 00:14:18.860057       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1213 00:14:18.860110       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1213 00:14:20.846751       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-12-13 00:08:49 UTC, ends at Wed 2023-12-13 00:28:55 UTC. --
	Dec 13 00:26:16 no-preload-143586 kubelet[4331]: E1213 00:26:16.974883    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:26:21 no-preload-143586 kubelet[4331]: E1213 00:26:21.077394    4331 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 13 00:26:21 no-preload-143586 kubelet[4331]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 00:26:21 no-preload-143586 kubelet[4331]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 00:26:21 no-preload-143586 kubelet[4331]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 00:26:29 no-preload-143586 kubelet[4331]: E1213 00:26:29.974410    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:26:40 no-preload-143586 kubelet[4331]: E1213 00:26:40.975135    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:26:51 no-preload-143586 kubelet[4331]: E1213 00:26:51.975239    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:27:04 no-preload-143586 kubelet[4331]: E1213 00:27:04.974811    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:27:15 no-preload-143586 kubelet[4331]: E1213 00:27:15.974547    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:27:21 no-preload-143586 kubelet[4331]: E1213 00:27:21.076811    4331 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 13 00:27:21 no-preload-143586 kubelet[4331]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 00:27:21 no-preload-143586 kubelet[4331]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 00:27:21 no-preload-143586 kubelet[4331]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 00:27:29 no-preload-143586 kubelet[4331]: E1213 00:27:29.974670    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:27:42 no-preload-143586 kubelet[4331]: E1213 00:27:42.976092    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:27:57 no-preload-143586 kubelet[4331]: E1213 00:27:57.974878    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:28:08 no-preload-143586 kubelet[4331]: E1213 00:28:08.977299    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:28:21 no-preload-143586 kubelet[4331]: E1213 00:28:21.078816    4331 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 13 00:28:21 no-preload-143586 kubelet[4331]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 13 00:28:21 no-preload-143586 kubelet[4331]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 13 00:28:21 no-preload-143586 kubelet[4331]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 13 00:28:23 no-preload-143586 kubelet[4331]: E1213 00:28:23.974745    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:28:37 no-preload-143586 kubelet[4331]: E1213 00:28:37.975187    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	Dec 13 00:28:50 no-preload-143586 kubelet[4331]: E1213 00:28:50.976288    4331 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-q7v45" podUID="1579f5c9-d574-4ab8-9add-e89621b9c203"
	
	* 
	* ==> storage-provisioner [20d184eec9f33c68b1c58d5ac3d3b42daefb6d790a40ececc0a0a6d73d8bc2e9] <==
	* I1213 00:14:37.692054       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 00:14:37.715881       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 00:14:37.716142       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 00:14:37.734671       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 00:14:37.736468       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-143586_ae84fa56-b1aa-4927-81cb-7ec9d3faeeb5!
	I1213 00:14:37.737122       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"198e3ab0-405c-4add-9058-2aa3fd8d2473", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-143586_ae84fa56-b1aa-4927-81cb-7ec9d3faeeb5 became leader
	I1213 00:14:37.837619       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-143586_ae84fa56-b1aa-4927-81cb-7ec9d3faeeb5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-143586 -n no-preload-143586
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-143586 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-q7v45
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-143586 describe pod metrics-server-57f55c9bc5-q7v45
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-143586 describe pod metrics-server-57f55c9bc5-q7v45: exit status 1 (72.650494ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-q7v45" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-143586 describe pod metrics-server-57f55c9bc5-q7v45: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (312.28s)
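For reference, a minimal manual check of the addon image (a sketch only, assuming the metrics-server addon is backed by a kube-system Deployment named metrics-server, as the pod names above suggest; this command is not part of the captured test output):

	kubectl --context no-preload-143586 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

Per the kubelet log above, the pod is configured to pull fake.domain/registry.k8s.io/echoserver:1.4 (matching the addons enable metrics-server --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain invocation shown in the Audit log), which is not pullable and leaves the pod in ImagePullBackOff.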

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (179.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1213 00:27:45.320496  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-508612 -n old-k8s-version-508612
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-13 00:28:48.565203742 +0000 UTC m=+5652.121341276
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-508612 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-508612 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.375µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-508612 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-508612 -n old-k8s-version-508612
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-508612 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-508612 logs -n 25: (1.684906422s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 13 Dec 23 00:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-380248                              | cert-expiration-380248       | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 12 Dec 23 23:58 UTC |
	| start   | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 13 Dec 23 00:01 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p pause-042245                                        | pause-042245                 | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 12 Dec 23 23:58 UTC |
	| start   | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 12 Dec 23 23:58 UTC | 13 Dec 23 00:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-884273                              | stopped-upgrade-884273       | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-884273                              | stopped-upgrade-884273       | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC | 13 Dec 23 00:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-343019 | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC | 13 Dec 23 00:00 UTC |
	|         | disable-driver-mounts-343019                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:00 UTC | 13 Dec 23 00:01 UTC |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-508612        | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-335807            | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-143586             | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC | 13 Dec 23 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-743278  | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:02 UTC | 13 Dec 23 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:02 UTC |                     |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-508612             | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-508612                              | old-k8s-version-508612       | jenkins | v1.32.0 | 13 Dec 23 00:03 UTC | 13 Dec 23 00:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-335807                 | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-335807                                  | embed-certs-335807           | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-143586                  | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-143586                                   | no-preload-143586            | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-743278       | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-743278 | jenkins | v1.32.0 | 13 Dec 23 00:04 UTC | 13 Dec 23 00:14 UTC |
	|         | default-k8s-diff-port-743278                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/13 00:04:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 00:04:40.034430  177409 out.go:296] Setting OutFile to fd 1 ...
	I1213 00:04:40.034592  177409 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:04:40.034601  177409 out.go:309] Setting ErrFile to fd 2...
	I1213 00:04:40.034606  177409 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1213 00:04:40.034805  177409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1213 00:04:40.035357  177409 out.go:303] Setting JSON to false
	I1213 00:04:40.036280  177409 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10028,"bootTime":1702415852,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 00:04:40.036342  177409 start.go:138] virtualization: kvm guest
	I1213 00:04:40.038707  177409 out.go:177] * [default-k8s-diff-port-743278] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1213 00:04:40.040139  177409 out.go:177]   - MINIKUBE_LOCATION=17777
	I1213 00:04:40.040129  177409 notify.go:220] Checking for updates...
	I1213 00:04:40.041788  177409 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 00:04:40.043246  177409 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:04:40.044627  177409 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1213 00:04:40.046091  177409 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 00:04:40.047562  177409 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 00:04:40.049427  177409 config.go:182] Loaded profile config "default-k8s-diff-port-743278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:04:40.049930  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:04:40.049979  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:04:40.064447  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44247
	I1213 00:04:40.064825  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:04:40.065333  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:04:40.065352  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:04:40.065686  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:04:40.065850  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:04:40.066092  177409 driver.go:392] Setting default libvirt URI to qemu:///system
	I1213 00:04:40.066357  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:04:40.066389  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:04:40.080217  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38499
	I1213 00:04:40.080643  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:04:40.081072  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:04:40.081098  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:04:40.081436  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:04:40.081622  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:04:40.114108  177409 out.go:177] * Using the kvm2 driver based on existing profile
	I1213 00:04:40.115585  177409 start.go:298] selected driver: kvm2
	I1213 00:04:40.115603  177409 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-743278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-743278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:04:40.115714  177409 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 00:04:40.116379  177409 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:04:40.116485  177409 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17777-136241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1213 00:04:40.131964  177409 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1213 00:04:40.132324  177409 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 00:04:40.132392  177409 cni.go:84] Creating CNI manager for ""
	I1213 00:04:40.132405  177409 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:04:40.132416  177409 start_flags.go:323] config:
	{Name:default-k8s-diff-port-743278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-743278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:04:40.132599  177409 iso.go:125] acquiring lock: {Name:mkaf0c5717f5bf6253bd7ebf86eb20a82d195bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 00:04:40.135330  177409 out.go:177] * Starting control plane node default-k8s-diff-port-743278 in cluster default-k8s-diff-port-743278
	I1213 00:04:36.772718  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:39.844694  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:40.136912  177409 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1213 00:04:40.136959  177409 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1213 00:04:40.136972  177409 cache.go:56] Caching tarball of preloaded images
	I1213 00:04:40.137094  177409 preload.go:174] Found /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 00:04:40.137108  177409 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1213 00:04:40.137215  177409 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/config.json ...
	I1213 00:04:40.137413  177409 start.go:365] acquiring machines lock for default-k8s-diff-port-743278: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 00:04:45.924700  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:48.996768  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:55.076732  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:04:58.148779  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:04.228721  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:07.300700  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:13.380743  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:16.452690  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:22.532695  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:25.604771  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:31.684681  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:34.756720  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:40.836697  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:43.908711  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:49.988729  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:53.060691  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:05:59.140737  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:02.212709  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:08.292717  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:11.364746  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:17.444722  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:20.516796  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:26.596650  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:29.668701  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:35.748723  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:38.820688  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:44.900719  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:47.972683  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:54.052708  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:06:57.124684  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:03.204728  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:06.276720  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:12.356681  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:15.428743  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:21.508696  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:24.580749  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:30.660747  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:33.732746  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:39.812738  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:42.884767  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:48.964744  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:52.036691  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:07:58.116726  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:08:01.188638  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:08:07.268756  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:08:10.340725  176813 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1213 00:08:13.345031  177122 start.go:369] acquired machines lock for "embed-certs-335807" in 4m2.39512191s
	I1213 00:08:13.345120  177122 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:08:13.345129  177122 fix.go:54] fixHost starting: 
	I1213 00:08:13.345524  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:08:13.345564  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:08:13.360333  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44799
	I1213 00:08:13.360832  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:08:13.361366  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:08:13.361390  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:08:13.361769  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:08:13.361941  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:13.362104  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:08:13.363919  177122 fix.go:102] recreateIfNeeded on embed-certs-335807: state=Stopped err=<nil>
	I1213 00:08:13.363938  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	W1213 00:08:13.364125  177122 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:08:13.366077  177122 out.go:177] * Restarting existing kvm2 VM for "embed-certs-335807" ...
	I1213 00:08:13.342763  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:08:13.342804  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:08:13.344878  176813 machine.go:91] provisioned docker machine in 4m37.409041046s
	I1213 00:08:13.344942  176813 fix.go:56] fixHost completed within 4m37.430106775s
	I1213 00:08:13.344949  176813 start.go:83] releasing machines lock for "old-k8s-version-508612", held for 4m37.430132032s
	W1213 00:08:13.344965  176813 start.go:694] error starting host: provision: host is not running
	W1213 00:08:13.345107  176813 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1213 00:08:13.345120  176813 start.go:709] Will try again in 5 seconds ...
	I1213 00:08:13.367310  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Start
	I1213 00:08:13.367451  177122 main.go:141] libmachine: (embed-certs-335807) Ensuring networks are active...
	I1213 00:08:13.368551  177122 main.go:141] libmachine: (embed-certs-335807) Ensuring network default is active
	I1213 00:08:13.368936  177122 main.go:141] libmachine: (embed-certs-335807) Ensuring network mk-embed-certs-335807 is active
	I1213 00:08:13.369290  177122 main.go:141] libmachine: (embed-certs-335807) Getting domain xml...
	I1213 00:08:13.369993  177122 main.go:141] libmachine: (embed-certs-335807) Creating domain...
	I1213 00:08:14.617766  177122 main.go:141] libmachine: (embed-certs-335807) Waiting to get IP...
	I1213 00:08:14.618837  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:14.619186  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:14.619322  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:14.619202  177987 retry.go:31] will retry after 226.757968ms: waiting for machine to come up
	I1213 00:08:14.847619  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:14.847962  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:14.847996  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:14.847892  177987 retry.go:31] will retry after 390.063287ms: waiting for machine to come up
	I1213 00:08:15.239515  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:15.239906  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:15.239939  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:15.239845  177987 retry.go:31] will retry after 341.644988ms: waiting for machine to come up
	I1213 00:08:15.583408  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:15.583848  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:15.583878  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:15.583796  177987 retry.go:31] will retry after 420.722896ms: waiting for machine to come up
	I1213 00:08:18.346616  176813 start.go:365] acquiring machines lock for old-k8s-version-508612: {Name:mk8c11045b61cb775530e0163603700760b5602d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 00:08:16.006364  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:16.006767  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:16.006803  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:16.006713  177987 retry.go:31] will retry after 548.041925ms: waiting for machine to come up
	I1213 00:08:16.556444  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:16.556880  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:16.556912  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:16.556833  177987 retry.go:31] will retry after 862.959808ms: waiting for machine to come up
	I1213 00:08:17.421147  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:17.421596  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:17.421630  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:17.421544  177987 retry.go:31] will retry after 1.085782098s: waiting for machine to come up
	I1213 00:08:18.509145  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:18.509595  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:18.509619  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:18.509556  177987 retry.go:31] will retry after 1.303432656s: waiting for machine to come up
	I1213 00:08:19.814985  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:19.815430  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:19.815473  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:19.815367  177987 retry.go:31] will retry after 1.337474429s: waiting for machine to come up
	I1213 00:08:21.154792  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:21.155213  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:21.155236  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:21.155165  177987 retry.go:31] will retry after 2.104406206s: waiting for machine to come up
	I1213 00:08:23.262615  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:23.263144  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:23.263174  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:23.263066  177987 retry.go:31] will retry after 2.064696044s: waiting for machine to come up
	I1213 00:08:25.330105  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:25.330586  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:25.330621  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:25.330544  177987 retry.go:31] will retry after 2.270537288s: waiting for machine to come up
	I1213 00:08:27.602267  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:27.602787  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:27.602810  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:27.602758  177987 retry.go:31] will retry after 3.020844169s: waiting for machine to come up
	I1213 00:08:30.626232  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:30.626696  177122 main.go:141] libmachine: (embed-certs-335807) DBG | unable to find current IP address of domain embed-certs-335807 in network mk-embed-certs-335807
	I1213 00:08:30.626731  177122 main.go:141] libmachine: (embed-certs-335807) DBG | I1213 00:08:30.626645  177987 retry.go:31] will retry after 5.329279261s: waiting for machine to come up
	I1213 00:08:37.405257  177307 start.go:369] acquired machines lock for "no-preload-143586" in 4m8.02482326s
	I1213 00:08:37.405329  177307 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:08:37.405340  177307 fix.go:54] fixHost starting: 
	I1213 00:08:37.405777  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:08:37.405830  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:08:37.422055  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41433
	I1213 00:08:37.422558  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:08:37.423112  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:08:37.423143  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:08:37.423462  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:08:37.423650  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:08:37.423795  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:08:37.425302  177307 fix.go:102] recreateIfNeeded on no-preload-143586: state=Stopped err=<nil>
	I1213 00:08:37.425345  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	W1213 00:08:37.425519  177307 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:08:37.428723  177307 out.go:177] * Restarting existing kvm2 VM for "no-preload-143586" ...
	I1213 00:08:35.958579  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:35.959166  177122 main.go:141] libmachine: (embed-certs-335807) Found IP for machine: 192.168.61.249
	I1213 00:08:35.959200  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has current primary IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:35.959212  177122 main.go:141] libmachine: (embed-certs-335807) Reserving static IP address...
	I1213 00:08:35.959676  177122 main.go:141] libmachine: (embed-certs-335807) Reserved static IP address: 192.168.61.249
	I1213 00:08:35.959731  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "embed-certs-335807", mac: "52:54:00:20:1b:c0", ip: "192.168.61.249"} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:35.959746  177122 main.go:141] libmachine: (embed-certs-335807) Waiting for SSH to be available...
	I1213 00:08:35.959779  177122 main.go:141] libmachine: (embed-certs-335807) DBG | skip adding static IP to network mk-embed-certs-335807 - found existing host DHCP lease matching {name: "embed-certs-335807", mac: "52:54:00:20:1b:c0", ip: "192.168.61.249"}
	I1213 00:08:35.959795  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Getting to WaitForSSH function...
	I1213 00:08:35.962033  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:35.962419  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:35.962448  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:35.962552  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Using SSH client type: external
	I1213 00:08:35.962575  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa (-rw-------)
	I1213 00:08:35.962608  177122 main.go:141] libmachine: (embed-certs-335807) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:08:35.962624  177122 main.go:141] libmachine: (embed-certs-335807) DBG | About to run SSH command:
	I1213 00:08:35.962637  177122 main.go:141] libmachine: (embed-certs-335807) DBG | exit 0
	I1213 00:08:36.056268  177122 main.go:141] libmachine: (embed-certs-335807) DBG | SSH cmd err, output: <nil>: 
	I1213 00:08:36.056649  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetConfigRaw
	I1213 00:08:36.057283  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetIP
	I1213 00:08:36.060244  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.060656  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.060705  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.060930  177122 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/config.json ...
	I1213 00:08:36.061132  177122 machine.go:88] provisioning docker machine ...
	I1213 00:08:36.061150  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:36.061386  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetMachineName
	I1213 00:08:36.061569  177122 buildroot.go:166] provisioning hostname "embed-certs-335807"
	I1213 00:08:36.061593  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetMachineName
	I1213 00:08:36.061737  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.063997  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.064352  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.064374  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.064532  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:36.064743  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.064899  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.065039  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:36.065186  177122 main.go:141] libmachine: Using SSH client type: native
	I1213 00:08:36.065556  177122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.249 22 <nil> <nil>}
	I1213 00:08:36.065575  177122 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-335807 && echo "embed-certs-335807" | sudo tee /etc/hostname
	I1213 00:08:36.199697  177122 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-335807
	
	I1213 00:08:36.199733  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.202879  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.203289  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.203312  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.203495  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:36.203705  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.203845  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.203968  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:36.204141  177122 main.go:141] libmachine: Using SSH client type: native
	I1213 00:08:36.204545  177122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.249 22 <nil> <nil>}
	I1213 00:08:36.204564  177122 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-335807' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-335807/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-335807' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:08:36.336285  177122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:08:36.336315  177122 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:08:36.336337  177122 buildroot.go:174] setting up certificates
	I1213 00:08:36.336350  177122 provision.go:83] configureAuth start
	I1213 00:08:36.336364  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetMachineName
	I1213 00:08:36.336658  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetIP
	I1213 00:08:36.339327  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.339695  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.339727  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.339861  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.342106  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.342485  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.342506  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.342613  177122 provision.go:138] copyHostCerts
	I1213 00:08:36.342699  177122 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:08:36.342711  177122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:08:36.342795  177122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:08:36.342910  177122 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:08:36.342928  177122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:08:36.342962  177122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:08:36.343051  177122 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:08:36.343061  177122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:08:36.343099  177122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:08:36.343185  177122 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.embed-certs-335807 san=[192.168.61.249 192.168.61.249 localhost 127.0.0.1 minikube embed-certs-335807]
	I1213 00:08:36.680595  177122 provision.go:172] copyRemoteCerts
	I1213 00:08:36.680687  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:08:36.680715  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.683411  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.683664  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.683690  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.683826  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:36.684044  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.684217  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:36.684370  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:08:36.773978  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:08:36.795530  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1213 00:08:36.817104  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 00:08:36.838510  177122 provision.go:86] duration metric: configureAuth took 502.141764ms
	I1213 00:08:36.838544  177122 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:08:36.838741  177122 config.go:182] Loaded profile config "embed-certs-335807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:08:36.838818  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:36.841372  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.841720  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:36.841759  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:36.841875  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:36.842095  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.842276  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:36.842447  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:36.842593  177122 main.go:141] libmachine: Using SSH client type: native
	I1213 00:08:36.843043  177122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.249 22 <nil> <nil>}
	I1213 00:08:36.843069  177122 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:08:37.150317  177122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:08:37.150364  177122 machine.go:91] provisioned docker machine in 1.089215763s
	I1213 00:08:37.150378  177122 start.go:300] post-start starting for "embed-certs-335807" (driver="kvm2")
	I1213 00:08:37.150391  177122 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:08:37.150424  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.150800  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:08:37.150829  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:37.153552  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.153920  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.153958  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.154075  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:37.154268  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.154406  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:37.154562  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:08:37.245839  177122 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:08:37.249929  177122 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:08:37.249959  177122 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:08:37.250029  177122 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:08:37.250114  177122 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:08:37.250202  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:08:37.258062  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:08:37.280034  177122 start.go:303] post-start completed in 129.642247ms
	I1213 00:08:37.280060  177122 fix.go:56] fixHost completed within 23.934930358s
	I1213 00:08:37.280085  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:37.282572  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.282861  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.282903  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.283059  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:37.283333  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.283516  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.283694  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:37.283898  177122 main.go:141] libmachine: Using SSH client type: native
	I1213 00:08:37.284217  177122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.249 22 <nil> <nil>}
	I1213 00:08:37.284229  177122 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1213 00:08:37.405050  177122 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702426117.378231894
	
	I1213 00:08:37.405077  177122 fix.go:206] guest clock: 1702426117.378231894
	I1213 00:08:37.405099  177122 fix.go:219] Guest: 2023-12-13 00:08:37.378231894 +0000 UTC Remote: 2023-12-13 00:08:37.280064166 +0000 UTC m=+266.483341520 (delta=98.167728ms)
	I1213 00:08:37.405127  177122 fix.go:190] guest clock delta is within tolerance: 98.167728ms
	I1213 00:08:37.405137  177122 start.go:83] releasing machines lock for "embed-certs-335807", held for 24.060057368s
	I1213 00:08:37.405161  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.405417  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetIP
	I1213 00:08:37.408128  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.408513  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.408559  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.408681  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.409264  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.409449  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:08:37.409542  177122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:08:37.409611  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:37.409647  177122 ssh_runner.go:195] Run: cat /version.json
	I1213 00:08:37.409673  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:08:37.412390  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.412733  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.412764  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.412795  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.412910  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:37.413104  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.413187  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:37.413224  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:37.413263  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:37.413462  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:08:37.413455  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:08:37.413633  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:08:37.413758  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:08:37.413899  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:08:37.531948  177122 ssh_runner.go:195] Run: systemctl --version
	I1213 00:08:37.537555  177122 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:08:37.677429  177122 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:08:37.684043  177122 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:08:37.684115  177122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:08:37.702304  177122 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:08:37.702327  177122 start.go:475] detecting cgroup driver to use...
	I1213 00:08:37.702388  177122 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:08:37.716601  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:08:37.728516  177122 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:08:37.728571  177122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:08:37.740595  177122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:08:37.753166  177122 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:08:37.853095  177122 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:08:37.970696  177122 docker.go:219] disabling docker service ...
	I1213 00:08:37.970769  177122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:08:37.983625  177122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:08:37.994924  177122 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:08:38.110057  177122 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:08:38.229587  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:08:38.243052  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:08:38.260480  177122 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1213 00:08:38.260547  177122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:08:38.269442  177122 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:08:38.269508  177122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:08:38.278569  177122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:08:38.287680  177122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:08:38.296798  177122 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:08:38.306247  177122 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:08:38.314189  177122 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:08:38.314251  177122 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:08:38.326702  177122 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:08:38.335111  177122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:08:38.435024  177122 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 00:08:38.600232  177122 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:08:38.600322  177122 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:08:38.606384  177122 start.go:543] Will wait 60s for crictl version
	I1213 00:08:38.606446  177122 ssh_runner.go:195] Run: which crictl
	I1213 00:08:38.611180  177122 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:08:38.654091  177122 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1213 00:08:38.654197  177122 ssh_runner.go:195] Run: crio --version
	I1213 00:08:38.705615  177122 ssh_runner.go:195] Run: crio --version
	I1213 00:08:38.755387  177122 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1213 00:08:37.430037  177307 main.go:141] libmachine: (no-preload-143586) Calling .Start
	I1213 00:08:37.430266  177307 main.go:141] libmachine: (no-preload-143586) Ensuring networks are active...
	I1213 00:08:37.430931  177307 main.go:141] libmachine: (no-preload-143586) Ensuring network default is active
	I1213 00:08:37.431290  177307 main.go:141] libmachine: (no-preload-143586) Ensuring network mk-no-preload-143586 is active
	I1213 00:08:37.431640  177307 main.go:141] libmachine: (no-preload-143586) Getting domain xml...
	I1213 00:08:37.432281  177307 main.go:141] libmachine: (no-preload-143586) Creating domain...
	I1213 00:08:38.686491  177307 main.go:141] libmachine: (no-preload-143586) Waiting to get IP...
	I1213 00:08:38.687472  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:38.688010  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:38.688095  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:38.687986  178111 retry.go:31] will retry after 246.453996ms: waiting for machine to come up
	I1213 00:08:38.936453  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:38.936931  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:38.936963  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:38.936879  178111 retry.go:31] will retry after 317.431088ms: waiting for machine to come up
	I1213 00:08:39.256641  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:39.257217  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:39.257241  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:39.257165  178111 retry.go:31] will retry after 379.635912ms: waiting for machine to come up
	I1213 00:08:38.757019  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetIP
	I1213 00:08:38.760125  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:38.760684  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:08:38.760720  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:08:38.760949  177122 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1213 00:08:38.765450  177122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:08:38.778459  177122 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1213 00:08:38.778539  177122 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:08:38.819215  177122 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1213 00:08:38.819281  177122 ssh_runner.go:195] Run: which lz4
	I1213 00:08:38.823481  177122 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 00:08:38.829034  177122 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 00:08:38.829069  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1213 00:08:40.721922  177122 crio.go:444] Took 1.898469 seconds to copy over tarball
	I1213 00:08:40.721984  177122 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 00:08:39.638611  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:39.639108  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:39.639137  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:39.639067  178111 retry.go:31] will retry after 596.16391ms: waiting for machine to come up
	I1213 00:08:40.237504  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:40.237957  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:40.237990  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:40.237911  178111 retry.go:31] will retry after 761.995315ms: waiting for machine to come up
	I1213 00:08:41.002003  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:41.002388  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:41.002413  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:41.002329  178111 retry.go:31] will retry after 693.578882ms: waiting for machine to come up
	I1213 00:08:41.697126  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:41.697617  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:41.697652  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:41.697555  178111 retry.go:31] will retry after 1.050437275s: waiting for machine to come up
	I1213 00:08:42.749227  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:42.749833  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:42.749866  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:42.749782  178111 retry.go:31] will retry after 1.175916736s: waiting for machine to come up
	I1213 00:08:43.927564  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:43.928115  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:43.928144  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:43.928065  178111 retry.go:31] will retry after 1.590924957s: waiting for machine to come up
	I1213 00:08:43.767138  177122 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.045121634s)
	I1213 00:08:43.767169  177122 crio.go:451] Took 3.045224 seconds to extract the tarball
	I1213 00:08:43.767178  177122 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 00:08:43.809047  177122 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:08:43.873704  177122 crio.go:496] all images are preloaded for cri-o runtime.
	I1213 00:08:43.873726  177122 cache_images.go:84] Images are preloaded, skipping loading
	I1213 00:08:43.873792  177122 ssh_runner.go:195] Run: crio config
	I1213 00:08:43.941716  177122 cni.go:84] Creating CNI manager for ""
	I1213 00:08:43.941747  177122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:08:43.941774  177122 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1213 00:08:43.941800  177122 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.249 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-335807 NodeName:embed-certs-335807 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 00:08:43.942026  177122 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-335807"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 00:08:43.942123  177122 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-335807 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-335807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1213 00:08:43.942201  177122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1213 00:08:43.951461  177122 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:08:43.951550  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:08:43.960491  177122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1213 00:08:43.976763  177122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 00:08:43.993725  177122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1213 00:08:44.010795  177122 ssh_runner.go:195] Run: grep 192.168.61.249	control-plane.minikube.internal$ /etc/hosts
	I1213 00:08:44.014668  177122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.249	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:08:44.027339  177122 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807 for IP: 192.168.61.249
	I1213 00:08:44.027376  177122 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:08:44.027550  177122 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:08:44.027617  177122 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:08:44.027701  177122 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/client.key
	I1213 00:08:44.027786  177122 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/apiserver.key.ba34ddd8
	I1213 00:08:44.027844  177122 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/proxy-client.key
	I1213 00:08:44.027987  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:08:44.028035  177122 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:08:44.028056  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:08:44.028088  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:08:44.028129  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:08:44.028158  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:08:44.028220  177122 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:08:44.029033  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:08:44.054023  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 00:08:44.078293  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:08:44.102083  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/embed-certs-335807/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 00:08:44.126293  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:08:44.149409  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:08:44.172887  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:08:44.195662  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:08:44.218979  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:08:44.241598  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:08:44.265251  177122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:08:44.290073  177122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:08:44.306685  177122 ssh_runner.go:195] Run: openssl version
	I1213 00:08:44.312422  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:08:44.322405  177122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:08:44.327215  177122 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:08:44.327296  177122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:08:44.333427  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1213 00:08:44.343574  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:08:44.353981  177122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:08:44.358997  177122 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:08:44.359051  177122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:08:44.364654  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 00:08:44.375147  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:08:44.384900  177122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:08:44.389492  177122 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:08:44.389553  177122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:08:44.395105  177122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 00:08:44.404656  177122 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:08:44.409852  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 00:08:44.415755  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 00:08:44.421911  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 00:08:44.428119  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 00:08:44.435646  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 00:08:44.441692  177122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 00:08:44.447849  177122 kubeadm.go:404] StartCluster: {Name:embed-certs-335807 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-335807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.249 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:08:44.447976  177122 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:08:44.448025  177122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:08:44.495646  177122 cri.go:89] found id: ""
	I1213 00:08:44.495744  177122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:08:44.506405  177122 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1213 00:08:44.506454  177122 kubeadm.go:636] restartCluster start
	I1213 00:08:44.506515  177122 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 00:08:44.516110  177122 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:44.517275  177122 kubeconfig.go:92] found "embed-certs-335807" server: "https://192.168.61.249:8443"
	I1213 00:08:44.519840  177122 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 00:08:44.529214  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:44.529294  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:44.540415  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:44.540447  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:44.540497  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:44.552090  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:45.052810  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:45.052890  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:45.066300  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:45.552897  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:45.553031  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:45.564969  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:45.520191  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:45.520729  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:45.520754  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:45.520662  178111 retry.go:31] will retry after 1.407916355s: waiting for machine to come up
	I1213 00:08:46.930655  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:46.931073  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:46.931138  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:46.930993  178111 retry.go:31] will retry after 2.033169427s: waiting for machine to come up
	I1213 00:08:48.966888  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:48.967318  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:48.967351  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:48.967253  178111 retry.go:31] will retry after 2.277791781s: waiting for machine to come up
	I1213 00:08:46.052915  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:46.053025  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:46.068633  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:46.552208  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:46.552317  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:46.565045  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:47.052533  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:47.052627  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:47.068457  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:47.553040  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:47.553127  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:47.564657  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:48.052228  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:48.052322  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:48.068950  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:48.553171  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:48.553256  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:48.568868  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:49.052389  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:49.052515  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:49.064674  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:49.552894  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:49.553012  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:49.564302  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:50.052843  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:50.052941  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:50.064617  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:50.553231  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:50.553316  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:50.567944  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:51.247665  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:51.248141  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:51.248175  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:51.248098  178111 retry.go:31] will retry after 4.234068925s: waiting for machine to come up
	I1213 00:08:51.052574  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:51.052700  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:51.069491  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:51.553152  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:51.553234  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:51.565331  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:52.052984  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:52.053064  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:52.064748  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:52.552257  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:52.552362  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:52.563626  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:53.053196  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:53.053287  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:53.064273  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:53.552319  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:53.552423  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:53.563587  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:54.053227  177122 api_server.go:166] Checking apiserver status ...
	I1213 00:08:54.053331  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:08:54.065636  177122 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:08:54.530249  177122 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1213 00:08:54.530301  177122 kubeadm.go:1135] stopping kube-system containers ...
	I1213 00:08:54.530330  177122 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 00:08:54.530424  177122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:08:54.570200  177122 cri.go:89] found id: ""
	I1213 00:08:54.570275  177122 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 00:08:54.586722  177122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:08:54.596240  177122 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:08:54.596313  177122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:08:54.605202  177122 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1213 00:08:54.605226  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:54.718619  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:55.483563  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:08:55.483985  177307 main.go:141] libmachine: (no-preload-143586) DBG | unable to find current IP address of domain no-preload-143586 in network mk-no-preload-143586
	I1213 00:08:55.484024  177307 main.go:141] libmachine: (no-preload-143586) DBG | I1213 00:08:55.483927  178111 retry.go:31] will retry after 5.446962632s: waiting for machine to come up
	I1213 00:08:55.944250  177122 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.225592219s)
	I1213 00:08:55.944282  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:56.132294  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:56.214859  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:08:56.297313  177122 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:08:56.297421  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:56.315946  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:56.830228  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:57.329695  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:57.830336  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:58.329610  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:58.829933  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:08:58.853978  177122 api_server.go:72] duration metric: took 2.556667404s to wait for apiserver process to appear ...
	I1213 00:08:58.854013  177122 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:08:58.854054  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:09:02.161624  177409 start.go:369] acquired machines lock for "default-k8s-diff-port-743278" in 4m22.024178516s
	I1213 00:09:02.161693  177409 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:09:02.161704  177409 fix.go:54] fixHost starting: 
	I1213 00:09:02.162127  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:02.162174  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:02.179045  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41749
	I1213 00:09:02.179554  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:02.180099  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:02.180131  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:02.180461  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:02.180658  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:02.180795  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:02.182459  177409 fix.go:102] recreateIfNeeded on default-k8s-diff-port-743278: state=Stopped err=<nil>
	I1213 00:09:02.182498  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	W1213 00:09:02.182657  177409 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:09:02.184934  177409 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-743278" ...
	I1213 00:09:00.933522  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:00.934020  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has current primary IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:00.934046  177307 main.go:141] libmachine: (no-preload-143586) Found IP for machine: 192.168.50.181
	I1213 00:09:00.934058  177307 main.go:141] libmachine: (no-preload-143586) Reserving static IP address...
	I1213 00:09:00.934538  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "no-preload-143586", mac: "52:54:00:4d:da:7b", ip: "192.168.50.181"} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:00.934573  177307 main.go:141] libmachine: (no-preload-143586) DBG | skip adding static IP to network mk-no-preload-143586 - found existing host DHCP lease matching {name: "no-preload-143586", mac: "52:54:00:4d:da:7b", ip: "192.168.50.181"}
	I1213 00:09:00.934592  177307 main.go:141] libmachine: (no-preload-143586) Reserved static IP address: 192.168.50.181
	I1213 00:09:00.934601  177307 main.go:141] libmachine: (no-preload-143586) Waiting for SSH to be available...
	I1213 00:09:00.934610  177307 main.go:141] libmachine: (no-preload-143586) DBG | Getting to WaitForSSH function...
	I1213 00:09:00.936830  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:00.937236  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:00.937283  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:00.937399  177307 main.go:141] libmachine: (no-preload-143586) DBG | Using SSH client type: external
	I1213 00:09:00.937421  177307 main.go:141] libmachine: (no-preload-143586) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa (-rw-------)
	I1213 00:09:00.937458  177307 main.go:141] libmachine: (no-preload-143586) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.181 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:09:00.937473  177307 main.go:141] libmachine: (no-preload-143586) DBG | About to run SSH command:
	I1213 00:09:00.937485  177307 main.go:141] libmachine: (no-preload-143586) DBG | exit 0
	I1213 00:09:01.024658  177307 main.go:141] libmachine: (no-preload-143586) DBG | SSH cmd err, output: <nil>: 
	I1213 00:09:01.024996  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetConfigRaw
	I1213 00:09:01.025611  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetIP
	I1213 00:09:01.028062  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.028471  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.028509  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.028734  177307 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/config.json ...
	I1213 00:09:01.028955  177307 machine.go:88] provisioning docker machine ...
	I1213 00:09:01.028980  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:01.029193  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetMachineName
	I1213 00:09:01.029394  177307 buildroot.go:166] provisioning hostname "no-preload-143586"
	I1213 00:09:01.029409  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetMachineName
	I1213 00:09:01.029580  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.031949  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.032273  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.032305  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.032413  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.032599  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.032758  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.032877  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.033036  177307 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:01.033377  177307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1213 00:09:01.033395  177307 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-143586 && echo "no-preload-143586" | sudo tee /etc/hostname
	I1213 00:09:01.157420  177307 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-143586
	
	I1213 00:09:01.157461  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.160181  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.160498  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.160535  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.160654  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.160915  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.161104  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.161299  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.161469  177307 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:01.161785  177307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1213 00:09:01.161811  177307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-143586' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-143586/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-143586' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:09:01.287746  177307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:09:01.287776  177307 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:09:01.287835  177307 buildroot.go:174] setting up certificates
	I1213 00:09:01.287844  177307 provision.go:83] configureAuth start
	I1213 00:09:01.287857  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetMachineName
	I1213 00:09:01.288156  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetIP
	I1213 00:09:01.290754  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.291147  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.291179  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.291296  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.293643  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.294002  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.294034  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.294184  177307 provision.go:138] copyHostCerts
	I1213 00:09:01.294243  177307 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:09:01.294256  177307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:09:01.294323  177307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:09:01.294441  177307 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:09:01.294453  177307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:09:01.294489  177307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:09:01.294569  177307 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:09:01.294578  177307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:09:01.294610  177307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:09:01.294683  177307 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.no-preload-143586 san=[192.168.50.181 192.168.50.181 localhost 127.0.0.1 minikube no-preload-143586]
	I1213 00:09:01.407742  177307 provision.go:172] copyRemoteCerts
	I1213 00:09:01.407823  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:09:01.407856  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.410836  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.411141  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.411170  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.411455  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.411698  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.411883  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.412056  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:09:01.501782  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 00:09:01.530009  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:09:01.555147  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1213 00:09:01.580479  177307 provision.go:86] duration metric: configureAuth took 292.598329ms
	I1213 00:09:01.580511  177307 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:09:01.580732  177307 config.go:182] Loaded profile config "no-preload-143586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1213 00:09:01.580835  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.583742  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.584241  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.584274  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.584581  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.584798  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.585004  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.585184  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.585429  177307 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:01.585889  177307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1213 00:09:01.585928  177307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:09:01.909801  177307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:09:01.909855  177307 machine.go:91] provisioned docker machine in 880.876025ms
	I1213 00:09:01.909868  177307 start.go:300] post-start starting for "no-preload-143586" (driver="kvm2")
	I1213 00:09:01.909883  177307 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:09:01.909905  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:01.910311  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:09:01.910349  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:01.913247  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.913635  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:01.913669  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:01.913824  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:01.914044  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:01.914199  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:01.914349  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:09:02.005986  177307 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:09:02.011294  177307 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:09:02.011323  177307 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:09:02.011403  177307 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:09:02.011494  177307 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:09:02.011601  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:09:02.022942  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:02.044535  177307 start.go:303] post-start completed in 134.650228ms
	I1213 00:09:02.044569  177307 fix.go:56] fixHost completed within 24.639227496s
	I1213 00:09:02.044597  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:02.047115  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.047543  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:02.047573  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.047758  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:02.047986  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:02.048161  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:02.048340  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:02.048500  177307 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:02.048803  177307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1213 00:09:02.048816  177307 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1213 00:09:02.161458  177307 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702426142.108795362
	
	I1213 00:09:02.161485  177307 fix.go:206] guest clock: 1702426142.108795362
	I1213 00:09:02.161496  177307 fix.go:219] Guest: 2023-12-13 00:09:02.108795362 +0000 UTC Remote: 2023-12-13 00:09:02.044573107 +0000 UTC m=+272.815740988 (delta=64.222255ms)
	I1213 00:09:02.161522  177307 fix.go:190] guest clock delta is within tolerance: 64.222255ms
	I1213 00:09:02.161529  177307 start.go:83] releasing machines lock for "no-preload-143586", held for 24.756225075s
	I1213 00:09:02.161560  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:02.161847  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetIP
	I1213 00:09:02.164980  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.165383  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:02.165406  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.165582  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:02.166273  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:02.166493  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:09:02.166576  177307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:09:02.166621  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:02.166903  177307 ssh_runner.go:195] Run: cat /version.json
	I1213 00:09:02.166931  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:09:02.169526  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.169553  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.169895  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:02.169938  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:02.169978  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.170000  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:02.170183  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:02.170282  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:09:02.170344  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:02.170473  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:02.170480  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:09:02.170603  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:09:02.170653  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:09:02.170804  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:09:02.281372  177307 ssh_runner.go:195] Run: systemctl --version
	I1213 00:09:02.288798  177307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:09:02.432746  177307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:09:02.441453  177307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:09:02.441539  177307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:09:02.456484  177307 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:09:02.456512  177307 start.go:475] detecting cgroup driver to use...
	I1213 00:09:02.456578  177307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:09:02.473267  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:09:02.485137  177307 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:09:02.485226  177307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:09:02.497728  177307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:09:02.510592  177307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:09:02.657681  177307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:09:02.791382  177307 docker.go:219] disabling docker service ...
	I1213 00:09:02.791476  177307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:09:02.804977  177307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:09:02.817203  177307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:09:02.927181  177307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:09:03.037010  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:09:03.050235  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:09:03.068944  177307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1213 00:09:03.069048  177307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:03.078813  177307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:09:03.078975  177307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:03.089064  177307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:03.098790  177307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:03.109679  177307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:09:03.120686  177307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:09:03.128767  177307 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:09:03.128820  177307 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:09:03.141210  177307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:09:03.149602  177307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:09:03.254618  177307 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 00:09:03.434005  177307 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:09:03.434097  177307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:09:03.440391  177307 start.go:543] Will wait 60s for crictl version
	I1213 00:09:03.440481  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:03.445570  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:09:03.492155  177307 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1213 00:09:03.492240  177307 ssh_runner.go:195] Run: crio --version
	I1213 00:09:03.549854  177307 ssh_runner.go:195] Run: crio --version
	I1213 00:09:03.605472  177307 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1213 00:09:03.606678  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetIP
	I1213 00:09:03.610326  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:03.610753  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:09:03.610789  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:09:03.611019  177307 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1213 00:09:03.616608  177307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:03.632258  177307 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1213 00:09:03.632317  177307 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:03.672640  177307 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1213 00:09:03.672666  177307 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 00:09:03.672723  177307 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:03.672772  177307 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:03.672774  177307 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:03.672820  177307 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:03.673002  177307 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1213 00:09:03.673032  177307 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:03.673038  177307 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:03.673094  177307 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:03.674386  177307 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:03.674433  177307 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1213 00:09:03.674505  177307 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:03.674648  177307 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:03.674774  177307 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:03.674822  177307 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:03.674864  177307 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:03.675103  177307 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:03.808980  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:03.812271  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1213 00:09:03.827742  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:03.828695  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:03.831300  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:03.846041  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:03.850598  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:03.908323  177307 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I1213 00:09:03.908378  177307 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:03.908458  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.122878  177307 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I1213 00:09:04.122930  177307 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:04.122955  177307 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1213 00:09:04.123115  177307 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:04.123132  177307 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1213 00:09:04.123164  177307 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:04.122988  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.123203  177307 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I1213 00:09:04.123230  177307 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:04.123245  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1213 00:09:04.123267  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.123065  177307 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I1213 00:09:04.123304  177307 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:04.123311  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.123338  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.123201  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:04.135289  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1213 00:09:04.139046  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1213 00:09:04.206020  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1213 00:09:04.206025  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I1213 00:09:04.206195  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1213 00:09:04.206291  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1213 00:09:04.206422  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1213 00:09:04.247875  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I1213 00:09:04.248003  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1213 00:09:04.248126  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1213 00:09:04.248193  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1213 00:09:02.719708  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:02.719761  177122 api_server.go:103] status: https://192.168.61.249:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:02.719779  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:09:02.780571  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:02.780621  177122 api_server.go:103] status: https://192.168.61.249:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:03.281221  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:09:03.290375  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:03.290413  177122 api_server.go:103] status: https://192.168.61.249:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:03.781510  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:09:03.788285  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:03.788314  177122 api_server.go:103] status: https://192.168.61.249:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:04.280872  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:09:04.288043  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 200:
	ok
	I1213 00:09:04.299772  177122 api_server.go:141] control plane version: v1.28.4
	I1213 00:09:04.299808  177122 api_server.go:131] duration metric: took 5.445787793s to wait for apiserver health ...
	I1213 00:09:04.299821  177122 cni.go:84] Creating CNI manager for ""
	I1213 00:09:04.299830  177122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:04.301759  177122 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:09:02.186420  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Start
	I1213 00:09:02.186584  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Ensuring networks are active...
	I1213 00:09:02.187464  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Ensuring network default is active
	I1213 00:09:02.187836  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Ensuring network mk-default-k8s-diff-port-743278 is active
	I1213 00:09:02.188238  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Getting domain xml...
	I1213 00:09:02.188979  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Creating domain...
	I1213 00:09:03.516121  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting to get IP...
	I1213 00:09:03.517461  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:03.518001  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:03.518058  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:03.517966  178294 retry.go:31] will retry after 198.440266ms: waiting for machine to come up
	I1213 00:09:03.718554  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:03.718808  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:03.718846  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:03.718804  178294 retry.go:31] will retry after 319.889216ms: waiting for machine to come up
	I1213 00:09:04.040334  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:04.040806  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:04.040956  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:04.040869  178294 retry.go:31] will retry after 465.804275ms: waiting for machine to come up
	I1213 00:09:04.508751  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:04.509133  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:04.509237  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:04.509181  178294 retry.go:31] will retry after 609.293222ms: waiting for machine to come up
	I1213 00:09:04.303497  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:09:04.332773  177122 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:09:04.373266  177122 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:09:04.384737  177122 system_pods.go:59] 9 kube-system pods found
	I1213 00:09:04.384791  177122 system_pods.go:61] "coredns-5dd5756b68-5vm25" [83fb4b19-82a2-42eb-b4df-6fd838fb8848] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 00:09:04.384805  177122 system_pods.go:61] "coredns-5dd5756b68-6mfmr" [e9598d8f-e497-4725-8eca-7fe0e7c2c2f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 00:09:04.384820  177122 system_pods.go:61] "etcd-embed-certs-335807" [cf066481-3312-4fce-8e29-e00a0177f188] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 00:09:04.384833  177122 system_pods.go:61] "kube-apiserver-embed-certs-335807" [0a545be1-8bb8-425a-889e-5ee1293e0bdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 00:09:04.384847  177122 system_pods.go:61] "kube-controller-manager-embed-certs-335807" [fd7ec770-5008-46f9-9f41-122e56baf2e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 00:09:04.384862  177122 system_pods.go:61] "kube-proxy-k8n7r" [df8cefdc-7c91-40e6-8949-ba413fd75b28] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 00:09:04.384874  177122 system_pods.go:61] "kube-scheduler-embed-certs-335807" [d2431157-640c-49e6-a83d-37cac6be1c50] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 00:09:04.384883  177122 system_pods.go:61] "metrics-server-57f55c9bc5-fx5pd" [8aa6fc5a-5649-47b2-a7de-3cabfd1515a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:09:04.384899  177122 system_pods.go:61] "storage-provisioner" [02026bc0-4e03-4747-ad77-052f2911efe1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 00:09:04.384909  177122 system_pods.go:74] duration metric: took 11.614377ms to wait for pod list to return data ...
	I1213 00:09:04.384928  177122 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:09:04.389533  177122 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:09:04.389578  177122 node_conditions.go:123] node cpu capacity is 2
	I1213 00:09:04.389594  177122 node_conditions.go:105] duration metric: took 4.657548ms to run NodePressure ...
	I1213 00:09:04.389622  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:04.771105  177122 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1213 00:09:04.778853  177122 kubeadm.go:787] kubelet initialised
	I1213 00:09:04.778886  177122 kubeadm.go:788] duration metric: took 7.74816ms waiting for restarted kubelet to initialise ...
	I1213 00:09:04.778898  177122 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:04.795344  177122 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:04.323893  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1213 00:09:04.323901  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I1213 00:09:04.324122  177307 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1213 00:09:04.324168  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1213 00:09:04.324006  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I1213 00:09:04.324031  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I1213 00:09:04.324300  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1213 00:09:04.324336  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1213 00:09:04.324067  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1213 00:09:04.324096  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I1213 00:09:04.324100  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1213 00:09:04.597566  177307 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:07.626684  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.302476413s)
	I1213 00:09:07.626718  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I1213 00:09:07.626754  177307 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1213 00:09:07.626784  177307 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0: (3.302394961s)
	I1213 00:09:07.626821  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1213 00:09:07.626824  177307 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (3.302508593s)
	I1213 00:09:07.626859  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I1213 00:09:07.626833  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1213 00:09:07.626882  177307 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.029282623s)
	I1213 00:09:07.626755  177307 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.302393062s)
	I1213 00:09:07.626939  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I1213 00:09:07.626975  177307 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1213 00:09:07.627010  177307 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:07.627072  177307 ssh_runner.go:195] Run: which crictl
	I1213 00:09:05.120691  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:05.121251  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:05.121285  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:05.121183  178294 retry.go:31] will retry after 488.195845ms: waiting for machine to come up
	I1213 00:09:05.610815  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:05.611226  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:05.611258  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:05.611167  178294 retry.go:31] will retry after 705.048097ms: waiting for machine to come up
	I1213 00:09:06.317891  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:06.318329  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:06.318353  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:06.318278  178294 retry.go:31] will retry after 788.420461ms: waiting for machine to come up
	I1213 00:09:07.108179  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:07.108736  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:07.108769  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:07.108696  178294 retry.go:31] will retry after 1.331926651s: waiting for machine to come up
	I1213 00:09:08.442609  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:08.443087  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:08.443114  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:08.443032  178294 retry.go:31] will retry after 1.180541408s: waiting for machine to come up
	I1213 00:09:09.625170  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:09.625722  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:09.625753  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:09.625653  178294 retry.go:31] will retry after 1.866699827s: waiting for machine to come up
	I1213 00:09:06.828008  177122 pod_ready.go:102] pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:09.322889  177122 pod_ready.go:102] pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:09.822883  177122 pod_ready.go:92] pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:09.822913  177122 pod_ready.go:81] duration metric: took 5.027534973s waiting for pod "coredns-5dd5756b68-5vm25" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:09.822927  177122 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6mfmr" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:09.828990  177122 pod_ready.go:92] pod "coredns-5dd5756b68-6mfmr" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:09.829018  177122 pod_ready.go:81] duration metric: took 6.083345ms waiting for pod "coredns-5dd5756b68-6mfmr" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:09.829035  177122 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:09.803403  177307 ssh_runner.go:235] Completed: which crictl: (2.176302329s)
	I1213 00:09:09.803541  177307 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:09.803468  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.176578633s)
	I1213 00:09:09.803602  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1213 00:09:09.803634  177307 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1213 00:09:09.803673  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1213 00:09:09.851557  177307 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1213 00:09:09.851690  177307 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1213 00:09:12.107222  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.303514888s)
	I1213 00:09:12.107284  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I1213 00:09:12.107292  177307 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.255575693s)
	I1213 00:09:12.107308  177307 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1213 00:09:12.107336  177307 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1213 00:09:12.107363  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1213 00:09:11.494563  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:11.495148  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:11.495182  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:11.495076  178294 retry.go:31] will retry after 2.859065831s: waiting for machine to come up
	I1213 00:09:14.356328  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:14.356778  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:14.356814  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:14.356719  178294 retry.go:31] will retry after 3.506628886s: waiting for machine to come up
	I1213 00:09:11.849447  177122 pod_ready.go:102] pod "etcd-embed-certs-335807" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:14.349299  177122 pod_ready.go:102] pod "etcd-embed-certs-335807" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:14.853963  177122 pod_ready.go:92] pod "etcd-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:14.853989  177122 pod_ready.go:81] duration metric: took 5.024945989s waiting for pod "etcd-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:14.854001  177122 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:14.861663  177122 pod_ready.go:92] pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:14.861685  177122 pod_ready.go:81] duration metric: took 7.676036ms waiting for pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:14.861697  177122 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:16.223090  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.115697846s)
	I1213 00:09:16.223134  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1213 00:09:16.223165  177307 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1213 00:09:16.223211  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1213 00:09:17.473407  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.25017316s)
	I1213 00:09:17.473435  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I1213 00:09:17.473476  177307 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1213 00:09:17.473552  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1213 00:09:17.864739  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:17.865213  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | unable to find current IP address of domain default-k8s-diff-port-743278 in network mk-default-k8s-diff-port-743278
	I1213 00:09:17.865237  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | I1213 00:09:17.865171  178294 retry.go:31] will retry after 2.94932643s: waiting for machine to come up
	I1213 00:09:16.884215  177122 pod_ready.go:102] pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:17.383872  177122 pod_ready.go:92] pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:17.383906  177122 pod_ready.go:81] duration metric: took 2.52219538s waiting for pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.383928  177122 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-k8n7r" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.389464  177122 pod_ready.go:92] pod "kube-proxy-k8n7r" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:17.389482  177122 pod_ready.go:81] duration metric: took 5.547172ms waiting for pod "kube-proxy-k8n7r" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.389490  177122 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.419020  177122 pod_ready.go:92] pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:17.419047  177122 pod_ready.go:81] duration metric: took 29.549704ms waiting for pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:17.419056  177122 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:19.730210  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:22.069281  176813 start.go:369] acquired machines lock for "old-k8s-version-508612" in 1m3.72259979s
	I1213 00:09:22.069359  176813 start.go:96] Skipping create...Using existing machine configuration
	I1213 00:09:22.069367  176813 fix.go:54] fixHost starting: 
	I1213 00:09:22.069812  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:22.069851  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:22.088760  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45405
	I1213 00:09:22.089211  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:22.089766  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:09:22.089795  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:22.090197  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:22.090396  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:22.090574  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:09:22.092039  176813 fix.go:102] recreateIfNeeded on old-k8s-version-508612: state=Stopped err=<nil>
	I1213 00:09:22.092064  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	W1213 00:09:22.092241  176813 fix.go:128] unexpected machine state, will restart: <nil>
	I1213 00:09:22.094310  176813 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-508612" ...
	I1213 00:09:20.817420  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.817794  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has current primary IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.817833  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Found IP for machine: 192.168.72.144
	I1213 00:09:20.817870  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Reserving static IP address...
	I1213 00:09:20.818250  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-743278", mac: "52:54:00:d1:a8:22", ip: "192.168.72.144"} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:20.818272  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Reserved static IP address: 192.168.72.144
	I1213 00:09:20.818286  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | skip adding static IP to network mk-default-k8s-diff-port-743278 - found existing host DHCP lease matching {name: "default-k8s-diff-port-743278", mac: "52:54:00:d1:a8:22", ip: "192.168.72.144"}
	I1213 00:09:20.818298  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Getting to WaitForSSH function...
	I1213 00:09:20.818312  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Waiting for SSH to be available...
	I1213 00:09:20.820093  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.820378  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:20.820409  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.820525  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Using SSH client type: external
	I1213 00:09:20.820552  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa (-rw-------)
	I1213 00:09:20.820587  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:09:20.820618  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | About to run SSH command:
	I1213 00:09:20.820632  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | exit 0
	I1213 00:09:20.907942  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | SSH cmd err, output: <nil>: 
	I1213 00:09:20.908280  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetConfigRaw
	I1213 00:09:20.909042  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetIP
	I1213 00:09:20.911222  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.911544  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:20.911569  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.911826  177409 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/config.json ...
	I1213 00:09:20.912048  177409 machine.go:88] provisioning docker machine ...
	I1213 00:09:20.912071  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:20.912284  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetMachineName
	I1213 00:09:20.912425  177409 buildroot.go:166] provisioning hostname "default-k8s-diff-port-743278"
	I1213 00:09:20.912460  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetMachineName
	I1213 00:09:20.912585  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:20.914727  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.915081  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:20.915113  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:20.915257  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:20.915449  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:20.915562  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:20.915671  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:20.915842  177409 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:20.916275  177409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1213 00:09:20.916293  177409 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-743278 && echo "default-k8s-diff-port-743278" | sudo tee /etc/hostname
	I1213 00:09:21.042561  177409 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-743278
	
	I1213 00:09:21.042606  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.045461  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.045809  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.045851  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.045957  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.046181  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.046350  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.046508  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.046685  177409 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:21.047008  177409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1213 00:09:21.047034  177409 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-743278' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-743278/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-743278' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:09:21.169124  177409 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:09:21.169155  177409 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:09:21.169175  177409 buildroot.go:174] setting up certificates
	I1213 00:09:21.169185  177409 provision.go:83] configureAuth start
	I1213 00:09:21.169194  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetMachineName
	I1213 00:09:21.169502  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetIP
	I1213 00:09:21.172929  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.173329  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.173361  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.173540  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.175847  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.176249  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.176277  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.176447  177409 provision.go:138] copyHostCerts
	I1213 00:09:21.176509  177409 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:09:21.176525  177409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:09:21.176584  177409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:09:21.176677  177409 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:09:21.176744  177409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:09:21.176775  177409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:09:21.176841  177409 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:09:21.176848  177409 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:09:21.176866  177409 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:09:21.176922  177409 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-743278 san=[192.168.72.144 192.168.72.144 localhost 127.0.0.1 minikube default-k8s-diff-port-743278]
	I1213 00:09:21.314924  177409 provision.go:172] copyRemoteCerts
	I1213 00:09:21.315003  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:09:21.315032  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.318149  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.318550  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.318582  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.318787  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.319005  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.319191  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.319346  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:21.409699  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:09:21.438626  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1213 00:09:21.468607  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 00:09:21.495376  177409 provision.go:86] duration metric: configureAuth took 326.171872ms
	I1213 00:09:21.495403  177409 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:09:21.495621  177409 config.go:182] Loaded profile config "default-k8s-diff-port-743278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:09:21.495700  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.498778  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.499247  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.499279  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.499495  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.499710  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.499877  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.500098  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.500285  177409 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:21.500728  177409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1213 00:09:21.500751  177409 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:09:21.822577  177409 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:09:21.822606  177409 machine.go:91] provisioned docker machine in 910.541774ms
	I1213 00:09:21.822619  177409 start.go:300] post-start starting for "default-k8s-diff-port-743278" (driver="kvm2")
	I1213 00:09:21.822632  177409 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:09:21.822659  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:21.823015  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:09:21.823044  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.825948  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.826367  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.826403  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.826577  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.826789  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.826965  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.827146  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:21.915743  177409 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:09:21.920142  177409 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:09:21.920169  177409 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:09:21.920249  177409 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:09:21.920343  177409 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:09:21.920474  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:09:21.929896  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:21.951854  177409 start.go:303] post-start completed in 129.217251ms
	I1213 00:09:21.951880  177409 fix.go:56] fixHost completed within 19.790175647s
	I1213 00:09:21.951904  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:21.954710  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.955103  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:21.955137  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:21.955352  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:21.955533  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.955685  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:21.955814  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:21.955980  177409 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:21.956485  177409 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1213 00:09:21.956505  177409 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1213 00:09:22.069059  177409 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702426162.011062386
	
	I1213 00:09:22.069089  177409 fix.go:206] guest clock: 1702426162.011062386
	I1213 00:09:22.069100  177409 fix.go:219] Guest: 2023-12-13 00:09:22.011062386 +0000 UTC Remote: 2023-12-13 00:09:21.951884769 +0000 UTC m=+281.971624237 (delta=59.177617ms)
	I1213 00:09:22.069142  177409 fix.go:190] guest clock delta is within tolerance: 59.177617ms
	I1213 00:09:22.069153  177409 start.go:83] releasing machines lock for "default-k8s-diff-port-743278", held for 19.907486915s
	I1213 00:09:22.069191  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:22.069478  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetIP
	I1213 00:09:22.072371  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.072761  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:22.072794  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.072922  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:22.073441  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:22.073605  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:22.073670  177409 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:09:22.073719  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:22.073821  177409 ssh_runner.go:195] Run: cat /version.json
	I1213 00:09:22.073841  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:22.076233  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.076550  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.076703  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:22.076733  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.076874  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:22.077050  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:22.077080  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:22.077052  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:22.077227  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:22.077303  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:22.077540  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:22.077630  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:22.077714  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:22.077851  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:22.188131  177409 ssh_runner.go:195] Run: systemctl --version
	I1213 00:09:22.193896  177409 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:09:22.339227  177409 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:09:22.346292  177409 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:09:22.346366  177409 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:09:22.361333  177409 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:09:22.361364  177409 start.go:475] detecting cgroup driver to use...
	I1213 00:09:22.361438  177409 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:09:22.374698  177409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:09:22.387838  177409 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:09:22.387897  177409 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:09:22.402969  177409 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:09:22.417038  177409 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:09:22.533130  177409 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:09:22.665617  177409 docker.go:219] disabling docker service ...
	I1213 00:09:22.665690  177409 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:09:22.681327  177409 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:09:22.692842  177409 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:09:22.816253  177409 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:09:22.951988  177409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:09:22.967607  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:09:22.985092  177409 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1213 00:09:22.985158  177409 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:22.994350  177409 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:09:22.994403  177409 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:23.003372  177409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:23.012176  177409 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:23.021215  177409 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:09:23.031105  177409 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:09:23.039486  177409 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:09:23.039552  177409 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:09:23.053085  177409 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:09:23.062148  177409 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:09:23.182275  177409 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 00:09:23.357901  177409 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:09:23.357991  177409 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:09:23.364148  177409 start.go:543] Will wait 60s for crictl version
	I1213 00:09:23.364225  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:09:23.368731  177409 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:09:23.408194  177409 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1213 00:09:23.408288  177409 ssh_runner.go:195] Run: crio --version
	I1213 00:09:23.461483  177409 ssh_runner.go:195] Run: crio --version
	I1213 00:09:23.513553  177409 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1213 00:09:20.148999  177307 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.675412499s)
	I1213 00:09:20.149037  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I1213 00:09:20.149073  177307 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1213 00:09:20.149116  177307 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1213 00:09:21.101559  177307 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1213 00:09:21.101608  177307 cache_images.go:123] Successfully loaded all cached images
	I1213 00:09:21.101615  177307 cache_images.go:92] LoadImages completed in 17.428934706s
	I1213 00:09:21.101694  177307 ssh_runner.go:195] Run: crio config
	I1213 00:09:21.159955  177307 cni.go:84] Creating CNI manager for ""
	I1213 00:09:21.159978  177307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:21.159999  177307 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1213 00:09:21.160023  177307 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.181 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-143586 NodeName:no-preload-143586 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 00:09:21.160198  177307 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-143586"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.181
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.181"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 00:09:21.160303  177307 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-143586 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-143586 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1213 00:09:21.160378  177307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1213 00:09:21.170615  177307 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:09:21.170701  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:09:21.180228  177307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1213 00:09:21.198579  177307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1213 00:09:21.215096  177307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1213 00:09:21.233288  177307 ssh_runner.go:195] Run: grep 192.168.50.181	control-plane.minikube.internal$ /etc/hosts
	I1213 00:09:21.236666  177307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:21.248811  177307 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586 for IP: 192.168.50.181
	I1213 00:09:21.248847  177307 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:21.249007  177307 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:09:21.249058  177307 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:09:21.249154  177307 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/client.key
	I1213 00:09:21.249238  177307 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/apiserver.key.8f5c2e66
	I1213 00:09:21.249291  177307 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/proxy-client.key
	I1213 00:09:21.249427  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:09:21.249468  177307 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:09:21.249484  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:09:21.249523  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:09:21.249559  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:09:21.249591  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:09:21.249642  177307 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:21.250517  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:09:21.276697  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 00:09:21.299356  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:09:21.322849  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 00:09:21.348145  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:09:21.370885  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:09:21.393257  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:09:21.418643  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:09:21.446333  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:09:21.476374  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:09:21.506662  177307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:09:21.530653  177307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:09:21.555129  177307 ssh_runner.go:195] Run: openssl version
	I1213 00:09:21.561174  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:09:21.571372  177307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:21.575988  177307 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:21.576053  177307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:21.581633  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 00:09:21.590564  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:09:21.599910  177307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:09:21.604113  177307 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:09:21.604160  177307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:09:21.609303  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1213 00:09:21.619194  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:09:21.628171  177307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:09:21.632419  177307 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:09:21.632494  177307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:09:21.638310  177307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 00:09:21.648369  177307 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:09:21.653143  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 00:09:21.659543  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 00:09:21.665393  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 00:09:21.670855  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 00:09:21.676290  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 00:09:21.681864  177307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
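The repeated "openssl x509 -noout -in <cert> -checkend 86400" runs above only verify that each control-plane certificate stays valid for at least another 24 hours. A minimal Go sketch of the same check (illustrative only, not minikube's implementation; the package and function names are made up):

package certcheck

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid for at
// least d, which is the question "openssl x509 -checkend <seconds>" answers.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

Calling validFor with 24*time.Hour on each file under /var/lib/minikube/certs answers the same question the log records for apiserver-etcd-client.crt, etcd/server.crt and the rest.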
	I1213 00:09:21.688162  177307 kubeadm.go:404] StartCluster: {Name:no-preload-143586 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:no-preload-143586 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.181 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:09:21.688243  177307 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:09:21.688280  177307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:21.727451  177307 cri.go:89] found id: ""
	I1213 00:09:21.727536  177307 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:09:21.739044  177307 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1213 00:09:21.739066  177307 kubeadm.go:636] restartCluster start
	I1213 00:09:21.739124  177307 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 00:09:21.747328  177307 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:21.748532  177307 kubeconfig.go:92] found "no-preload-143586" server: "https://192.168.50.181:8443"
	I1213 00:09:21.751029  177307 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 00:09:21.759501  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:21.759546  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:21.771029  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:21.771048  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:21.771095  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:21.782184  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:22.282507  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:22.282588  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:22.294105  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:22.783207  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:22.783266  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:22.796776  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:23.282325  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:23.282395  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:23.295974  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:23.782516  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:23.782615  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:23.797912  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:23.514911  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetIP
	I1213 00:09:23.517973  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:23.518335  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:23.518366  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:23.518566  177409 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1213 00:09:23.523522  177409 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:23.537195  177409 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1213 00:09:23.537261  177409 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:23.579653  177409 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1213 00:09:23.579729  177409 ssh_runner.go:195] Run: which lz4
	I1213 00:09:23.583956  177409 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 00:09:23.588686  177409 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 00:09:23.588720  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
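Before transferring the roughly 458 MB preload tarball, the runner probes the guest with stat and only falls back to scp when the probe fails, as the exit-status-1 result above shows. A hypothetical helper (not minikube's code) that captures that decision:

package preload

// needsPreloadCopy shows the decision made above: if the stat probe run over
// SSH fails, the tarball is absent on the guest and must be copied before
// extraction; run is a stand-in for the remote command runner.
func needsPreloadCopy(run func(cmd string) error) bool {
	return run(`stat -c "%s %y" /preloaded.tar.lz4`) != nil
}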
	I1213 00:09:22.095647  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Start
	I1213 00:09:22.095821  176813 main.go:141] libmachine: (old-k8s-version-508612) Ensuring networks are active...
	I1213 00:09:22.096548  176813 main.go:141] libmachine: (old-k8s-version-508612) Ensuring network default is active
	I1213 00:09:22.096936  176813 main.go:141] libmachine: (old-k8s-version-508612) Ensuring network mk-old-k8s-version-508612 is active
	I1213 00:09:22.097366  176813 main.go:141] libmachine: (old-k8s-version-508612) Getting domain xml...
	I1213 00:09:22.097939  176813 main.go:141] libmachine: (old-k8s-version-508612) Creating domain...
	I1213 00:09:23.423128  176813 main.go:141] libmachine: (old-k8s-version-508612) Waiting to get IP...
	I1213 00:09:23.424090  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:23.424606  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:23.424676  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:23.424588  178471 retry.go:31] will retry after 260.416347ms: waiting for machine to come up
	I1213 00:09:23.687268  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:23.687867  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:23.687902  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:23.687814  178471 retry.go:31] will retry after 377.709663ms: waiting for machine to come up
	I1213 00:09:24.067588  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:24.068249  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:24.068277  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:24.068177  178471 retry.go:31] will retry after 480.876363ms: waiting for machine to come up
	I1213 00:09:24.550715  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:24.551244  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:24.551278  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:24.551191  178471 retry.go:31] will retry after 389.885819ms: waiting for machine to come up
	I1213 00:09:24.942898  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:24.943495  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:24.943526  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:24.943443  178471 retry.go:31] will retry after 532.578432ms: waiting for machine to come up
	I1213 00:09:25.478278  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:25.478810  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:25.478845  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:25.478781  178471 retry.go:31] will retry after 599.649827ms: waiting for machine to come up
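The retry.go:31 lines above show libmachine polling the libvirt DHCP leases with a growing delay until the new domain reports an IP. A condensed sketch of that wait loop, with lookupIP standing in for the real lease query (names and delays are illustrative, not the actual implementation):

package machinewait

import (
	"fmt"
	"time"
)

// waitForIP polls lookupIP with an increasing delay until an address appears
// or the timeout expires, mirroring the "will retry after ..." loop above.
func waitForIP(lookupIP func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookupIP(); ok {
			return ip, nil
		}
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay += delay / 2 // stretch the wait between probes
		}
	}
	return "", fmt.Errorf("timed out waiting for machine IP")
}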
	I1213 00:09:22.230086  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:24.729105  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
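The pod_ready.go:102 entries above poll the metrics-server pod until its Ready condition flips to True; for the whole window shown it stays False. A hedged sketch of that condition check, assuming the upstream k8s.io/api/core/v1 types (this is not minikube's exact helper):

package readiness

import corev1 "k8s.io/api/core/v1"

// podIsReady reports whether a pod's Ready condition is True, the status the
// log keeps printing as "False" for metrics-server-57f55c9bc5-fx5pd.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}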
	I1213 00:09:24.282598  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:24.282708  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:24.298151  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:24.782530  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:24.782639  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:24.798661  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:25.283235  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:25.283393  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:25.297662  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:25.783319  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:25.783436  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:25.797129  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:26.282666  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:26.282789  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:26.295674  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:26.783065  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:26.783147  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:26.794192  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:27.282703  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:27.282775  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:27.294823  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:27.782891  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:27.782975  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:27.798440  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:28.282826  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:28.282909  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:28.293752  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:28.782264  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:28.782325  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:28.793986  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:25.524765  177409 crio.go:444] Took 1.940853 seconds to copy over tarball
	I1213 00:09:25.524843  177409 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 00:09:28.426493  177409 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.901618536s)
	I1213 00:09:28.426522  177409 crio.go:451] Took 2.901730 seconds to extract the tarball
	I1213 00:09:28.426533  177409 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 00:09:28.467170  177409 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:28.520539  177409 crio.go:496] all images are preloaded for cri-o runtime.
	I1213 00:09:28.520567  177409 cache_images.go:84] Images are preloaded, skipping loading
	I1213 00:09:28.520654  177409 ssh_runner.go:195] Run: crio config
	I1213 00:09:28.588320  177409 cni.go:84] Creating CNI manager for ""
	I1213 00:09:28.588348  177409 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:28.588370  177409 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1213 00:09:28.588395  177409 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-743278 NodeName:default-k8s-diff-port-743278 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 00:09:28.588593  177409 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-743278"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 00:09:28.588687  177409 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-743278 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-743278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1213 00:09:28.588755  177409 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1213 00:09:28.597912  177409 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:09:28.597987  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:09:28.608324  177409 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1213 00:09:28.627102  177409 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 00:09:28.646837  177409 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1213 00:09:28.664534  177409 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I1213 00:09:28.668580  177409 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:28.680736  177409 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278 for IP: 192.168.72.144
	I1213 00:09:28.680777  177409 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:28.680971  177409 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:09:28.681037  177409 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:09:28.681140  177409 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/client.key
	I1213 00:09:28.681234  177409 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/apiserver.key.1dd7f3f2
	I1213 00:09:28.681301  177409 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/proxy-client.key
	I1213 00:09:28.681480  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:09:28.681525  177409 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:09:28.681543  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:09:28.681587  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:09:28.681636  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:09:28.681681  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:09:28.681743  177409 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:28.682710  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:09:28.707852  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 00:09:28.732792  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:09:28.755545  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 00:09:28.779880  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:09:28.805502  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:09:28.829894  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:09:28.853211  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:09:28.877291  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:09:28.899870  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:09:28.922141  177409 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:09:28.945634  177409 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:09:28.962737  177409 ssh_runner.go:195] Run: openssl version
	I1213 00:09:28.968869  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:09:28.980535  177409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:09:28.985219  177409 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:09:28.985284  177409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:09:28.990911  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 00:09:29.001595  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:09:29.012408  177409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:29.017644  177409 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:29.017760  177409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:29.023914  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 00:09:29.034793  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:09:29.045825  177409 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:09:29.050538  177409 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:09:29.050584  177409 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:09:29.057322  177409 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1213 00:09:29.067993  177409 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:09:29.072782  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 00:09:29.078806  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 00:09:29.084744  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 00:09:29.090539  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 00:09:29.096734  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 00:09:29.102729  177409 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 00:09:29.108909  177409 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-743278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-743278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:09:29.109022  177409 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:09:29.109095  177409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:29.158003  177409 cri.go:89] found id: ""
	I1213 00:09:29.158100  177409 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:09:29.169464  177409 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1213 00:09:29.169500  177409 kubeadm.go:636] restartCluster start
	I1213 00:09:29.169555  177409 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 00:09:29.180347  177409 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:29.181609  177409 kubeconfig.go:92] found "default-k8s-diff-port-743278" server: "https://192.168.72.144:8444"
	I1213 00:09:29.184377  177409 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 00:09:29.193593  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.193645  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.205447  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:29.205465  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.205519  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.221169  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:29.721729  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.721835  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.735942  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:26.080407  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:26.081034  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:26.081061  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:26.080973  178471 retry.go:31] will retry after 1.103545811s: waiting for machine to come up
	I1213 00:09:27.186673  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:27.187208  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:27.187241  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:27.187152  178471 retry.go:31] will retry after 977.151221ms: waiting for machine to come up
	I1213 00:09:28.165799  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:28.166219  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:28.166257  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:28.166166  178471 retry.go:31] will retry after 1.27451971s: waiting for machine to come up
	I1213 00:09:29.441683  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:29.442203  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:29.442240  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:29.442122  178471 retry.go:31] will retry after 1.620883976s: waiting for machine to come up
	I1213 00:09:26.733297  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:29.624623  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:29.282975  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.621544  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.632749  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:29.783112  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:29.783214  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:29.794919  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:30.282457  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:30.282528  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:30.293852  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:30.782400  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:30.782499  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:30.797736  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.282276  177307 api_server.go:166] Checking apiserver status ...
	I1213 00:09:31.282367  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:31.298115  177307 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.759957  177307 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1213 00:09:31.760001  177307 kubeadm.go:1135] stopping kube-system containers ...
	I1213 00:09:31.760013  177307 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 00:09:31.760078  177307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:31.799045  177307 cri.go:89] found id: ""
	I1213 00:09:31.799146  177307 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 00:09:31.813876  177307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:09:31.823305  177307 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:09:31.823382  177307 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:09:31.831741  177307 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1213 00:09:31.831767  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:31.961871  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:32.826330  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:33.045107  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:33.119065  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:33.187783  177307 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:09:33.187887  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:33.217142  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:33.735695  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:34.236063  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
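With no usable kubeconfig files on disk, restartCluster rebuilds the control plane by replaying the individual kubeadm init phases shown above (certs, kubeconfig, kubelet-start, control-plane, etcd) against /var/tmp/minikube/kubeadm.yaml, then waits for the apiserver process to reappear. A rough sketch of sequencing those phases (illustrative; the env PATH wrapping and error handling used by minikube are simplified):

package reconfigure

import (
	"fmt"
	"os/exec"
	"strings"
)

// runInitPhases replays the kubeadm init phases used above to rebuild a
// stopped control plane from an existing kubeadm.yaml.
func runInitPhases(kubeadm, config string) error {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", config)
		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm init phase %s: %v\n%s", phase, err, out)
		}
	}
	return nil
}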
	I1213 00:09:30.221906  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:30.230723  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:30.243849  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:30.721380  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:30.721492  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:30.734401  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.222026  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:31.222150  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:31.235400  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.722107  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:31.722189  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:31.735415  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:32.222216  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:32.222365  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:32.238718  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:32.721270  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:32.721389  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:32.735677  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:33.222261  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:33.222329  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:33.243918  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:33.721349  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:33.721438  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:33.738138  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:34.221645  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:34.221748  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:34.238845  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:34.721320  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:34.721390  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:34.738271  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:31.065065  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:31.065494  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:31.065528  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:31.065436  178471 retry.go:31] will retry after 2.452686957s: waiting for machine to come up
	I1213 00:09:33.519937  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:33.520505  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:33.520537  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:33.520468  178471 retry.go:31] will retry after 2.830999713s: waiting for machine to come up
	I1213 00:09:31.729101  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:34.229143  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:34.735218  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:35.235570  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:35.736120  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:35.764916  177307 api_server.go:72] duration metric: took 2.577131698s to wait for apiserver process to appear ...
	I1213 00:09:35.764942  177307 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:09:35.764971  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:35.765820  177307 api_server.go:269] stopped: https://192.168.50.181:8443/healthz: Get "https://192.168.50.181:8443/healthz": dial tcp 192.168.50.181:8443: connect: connection refused
	I1213 00:09:35.765860  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:35.766257  177307 api_server.go:269] stopped: https://192.168.50.181:8443/healthz: Get "https://192.168.50.181:8443/healthz": dial tcp 192.168.50.181:8443: connect: connection refused
	I1213 00:09:36.266842  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
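Once the kubelet is restarted, the check switches from pgrep to probing https://192.168.50.181:8443/healthz; connection refused, 403 (RBAC not yet bootstrapped) and 500 (post-start hooks still failing) responses are all treated as "retry", as the responses further below show. A simplified sketch of such a probe loop (illustrative; the real client wiring, certificates and retry policy differ):

package healthprobe

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns HTTP 200 or the timeout expires.
// Connection errors and non-200 codes are treated as "not healthy yet".
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification keeps the sketch short; the real check
		// trusts the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s to report healthy", url)
}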
	I1213 00:09:35.221935  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:35.222069  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:35.240609  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:35.721801  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:35.721965  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:35.765295  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:36.221944  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:36.222021  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:36.238211  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:36.721750  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:36.721830  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:36.736765  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:37.221936  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:37.222185  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:37.238002  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:37.721304  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:37.721385  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:37.734166  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:38.221603  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:38.221701  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:38.235174  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:38.721704  177409 api_server.go:166] Checking apiserver status ...
	I1213 00:09:38.721794  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:38.735963  177409 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:39.193664  177409 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1213 00:09:39.193713  177409 kubeadm.go:1135] stopping kube-system containers ...
	I1213 00:09:39.193727  177409 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 00:09:39.193787  177409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:39.238262  177409 cri.go:89] found id: ""
	I1213 00:09:39.238336  177409 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 00:09:39.258625  177409 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:09:39.271127  177409 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:09:39.271196  177409 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:09:39.280870  177409 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1213 00:09:39.280906  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:39.399746  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:36.353967  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:36.354453  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:36.354481  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:36.354415  178471 retry.go:31] will retry after 2.983154328s: waiting for machine to come up
	I1213 00:09:39.341034  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:39.341497  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | unable to find current IP address of domain old-k8s-version-508612 in network mk-old-k8s-version-508612
	I1213 00:09:39.341526  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | I1213 00:09:39.341462  178471 retry.go:31] will retry after 3.436025657s: waiting for machine to come up
	I1213 00:09:36.230811  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:38.729730  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:40.732654  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:39.693843  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:39.693877  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:39.693896  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:39.767118  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:39.767153  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:39.767169  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:39.787684  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:39.787725  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:40.267069  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:40.272416  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:40.272464  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:40.766651  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:40.799906  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:40.799942  177307 api_server.go:103] status: https://192.168.50.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:41.266411  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:09:41.271259  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 200:
	ok
	I1213 00:09:41.278691  177307 api_server.go:141] control plane version: v1.29.0-rc.2
	I1213 00:09:41.278715  177307 api_server.go:131] duration metric: took 5.51376527s to wait for apiserver health ...
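	[editor's note] The /healthz probes above follow the usual pattern for waiting on a restarted apiserver: 403 while anonymous access is still forbidden, 500 while poststarthooks (rbac/bootstrap-roles, bootstrap-controller, …) are still failing, then 200 "ok". This is a minimal, illustrative Go sketch of such a poll loop — not minikube's actual api_server.go code; the URL, timeout, and the waitForHealthz name are placeholders, and TLS verification is skipped because the probe runs before client certs are configured:

	```go
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
	// or the timeout expires, logging non-200 bodies (the poststarthook list).
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported "ok"
				}
				// 403: RBAC bootstrap not finished; 500: some poststarthooks still failing.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.181:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	```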
	I1213 00:09:41.278725  177307 cni.go:84] Creating CNI manager for ""
	I1213 00:09:41.278732  177307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:41.280473  177307 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:09:41.281924  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:09:41.308598  177307 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:09:41.330367  177307 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:09:41.342017  177307 system_pods.go:59] 8 kube-system pods found
	I1213 00:09:41.342048  177307 system_pods.go:61] "coredns-76f75df574-87nc6" [829c7a44-85a0-4ed0-b98a-b5016aa04b97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 00:09:41.342054  177307 system_pods.go:61] "etcd-no-preload-143586" [b50e57af-530a-4689-bf42-a9f74fa6bea1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 00:09:41.342065  177307 system_pods.go:61] "kube-apiserver-no-preload-143586" [3aed4b84-e029-433a-8394-f99608b52edd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 00:09:41.342071  177307 system_pods.go:61] "kube-controller-manager-no-preload-143586" [f88e182a-0a81-4c7b-b2b3-d6911baf340f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 00:09:41.342080  177307 system_pods.go:61] "kube-proxy-8k9x6" [a71d2257-2012-4d0d-948d-d69c0c04bd2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 00:09:41.342086  177307 system_pods.go:61] "kube-scheduler-no-preload-143586" [dfb7b176-fbf8-4542-890f-1eba0f699b96] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 00:09:41.342098  177307 system_pods.go:61] "metrics-server-57f55c9bc5-px5lm" [25b8b500-0ad0-4da3-8f7f-d8c46a848e8a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:09:41.342106  177307 system_pods.go:61] "storage-provisioner" [bb18a95a-ed99-43f7-bc6f-333e0b86cacc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 00:09:41.342114  177307 system_pods.go:74] duration metric: took 11.726461ms to wait for pod list to return data ...
	I1213 00:09:41.342132  177307 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:09:41.345985  177307 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:09:41.346011  177307 node_conditions.go:123] node cpu capacity is 2
	I1213 00:09:41.346021  177307 node_conditions.go:105] duration metric: took 3.884209ms to run NodePressure ...
	I1213 00:09:41.346038  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:41.682789  177307 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1213 00:09:41.690867  177307 kubeadm.go:787] kubelet initialised
	I1213 00:09:41.690892  177307 kubeadm.go:788] duration metric: took 8.076203ms waiting for restarted kubelet to initialise ...
	I1213 00:09:41.690902  177307 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:41.698622  177307 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-87nc6" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:43.720619  177307 pod_ready.go:102] pod "coredns-76f75df574-87nc6" in "kube-system" namespace has status "Ready":"False"
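	[editor's note] The pod_ready.go lines above poll system-critical pods until their PodReady condition turns True. A minimal client-go sketch of that check follows; it is only an illustration of the condition being waited on, not the test's own helper — the kubeconfig path and pod name are placeholders:

	```go
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True,
	// the same condition the pod_ready.go log lines above are waiting on.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path and pod name for illustration only.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-87nc6", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}
	```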
	I1213 00:09:40.471390  177409 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.071602244s)
	I1213 00:09:40.471425  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:40.665738  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:40.786290  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:40.859198  177409 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:09:40.859302  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:40.887488  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:41.406398  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:41.906653  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:42.405784  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:42.906462  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:43.406489  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:09:43.432933  177409 api_server.go:72] duration metric: took 2.573735322s to wait for apiserver process to appear ...
	I1213 00:09:43.432975  177409 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:09:43.432997  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:43.433588  177409 api_server.go:269] stopped: https://192.168.72.144:8444/healthz: Get "https://192.168.72.144:8444/healthz": dial tcp 192.168.72.144:8444: connect: connection refused
	I1213 00:09:43.433641  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:43.434089  177409 api_server.go:269] stopped: https://192.168.72.144:8444/healthz: Get "https://192.168.72.144:8444/healthz": dial tcp 192.168.72.144:8444: connect: connection refused
	I1213 00:09:43.934469  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:42.779498  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.779971  176813 main.go:141] libmachine: (old-k8s-version-508612) Found IP for machine: 192.168.39.70
	I1213 00:09:42.779993  176813 main.go:141] libmachine: (old-k8s-version-508612) Reserving static IP address...
	I1213 00:09:42.780011  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has current primary IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.780466  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "old-k8s-version-508612", mac: "52:54:00:dd:da:91", ip: "192.168.39.70"} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:42.780504  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | skip adding static IP to network mk-old-k8s-version-508612 - found existing host DHCP lease matching {name: "old-k8s-version-508612", mac: "52:54:00:dd:da:91", ip: "192.168.39.70"}
	I1213 00:09:42.780524  176813 main.go:141] libmachine: (old-k8s-version-508612) Reserved static IP address: 192.168.39.70
	I1213 00:09:42.780547  176813 main.go:141] libmachine: (old-k8s-version-508612) Waiting for SSH to be available...
	I1213 00:09:42.780559  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Getting to WaitForSSH function...
	I1213 00:09:42.783019  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.783434  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:42.783482  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.783566  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Using SSH client type: external
	I1213 00:09:42.783598  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Using SSH private key: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa (-rw-------)
	I1213 00:09:42.783638  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1213 00:09:42.783661  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | About to run SSH command:
	I1213 00:09:42.783681  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | exit 0
	I1213 00:09:42.885148  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | SSH cmd err, output: <nil>: 
	I1213 00:09:42.885690  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetConfigRaw
	I1213 00:09:42.886388  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetIP
	I1213 00:09:42.889440  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.889898  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:42.889937  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.890209  176813 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/config.json ...
	I1213 00:09:42.890423  176813 machine.go:88] provisioning docker machine ...
	I1213 00:09:42.890444  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:42.890685  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetMachineName
	I1213 00:09:42.890874  176813 buildroot.go:166] provisioning hostname "old-k8s-version-508612"
	I1213 00:09:42.890899  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetMachineName
	I1213 00:09:42.891039  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:42.893678  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.894021  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:42.894051  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:42.894174  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:42.894391  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:42.894556  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:42.894720  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:42.894909  176813 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:42.895383  176813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1213 00:09:42.895406  176813 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-508612 && echo "old-k8s-version-508612" | sudo tee /etc/hostname
	I1213 00:09:43.045290  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-508612
	
	I1213 00:09:43.045345  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.048936  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.049438  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.049476  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.049662  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:43.049877  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.050074  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.050231  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:43.050413  176813 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:43.050888  176813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1213 00:09:43.050919  176813 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-508612' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-508612/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-508612' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 00:09:43.183021  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 00:09:43.183061  176813 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17777-136241/.minikube CaCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17777-136241/.minikube}
	I1213 00:09:43.183089  176813 buildroot.go:174] setting up certificates
	I1213 00:09:43.183102  176813 provision.go:83] configureAuth start
	I1213 00:09:43.183115  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetMachineName
	I1213 00:09:43.183467  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetIP
	I1213 00:09:43.186936  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.187409  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.187441  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.187620  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.190125  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.190560  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.190612  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.190775  176813 provision.go:138] copyHostCerts
	I1213 00:09:43.190842  176813 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem, removing ...
	I1213 00:09:43.190861  176813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem
	I1213 00:09:43.190936  176813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/ca.pem (1082 bytes)
	I1213 00:09:43.191113  176813 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem, removing ...
	I1213 00:09:43.191126  176813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem
	I1213 00:09:43.191158  176813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/cert.pem (1123 bytes)
	I1213 00:09:43.191245  176813 exec_runner.go:144] found /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem, removing ...
	I1213 00:09:43.191256  176813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem
	I1213 00:09:43.191284  176813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17777-136241/.minikube/key.pem (1679 bytes)
	I1213 00:09:43.191354  176813 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-508612 san=[192.168.39.70 192.168.39.70 localhost 127.0.0.1 minikube old-k8s-version-508612]
	I1213 00:09:43.321927  176813 provision.go:172] copyRemoteCerts
	I1213 00:09:43.321999  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 00:09:43.322039  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.325261  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.325653  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.325686  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.325920  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:43.326128  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.326300  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:43.326474  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:09:43.420656  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 00:09:43.445997  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1213 00:09:43.471466  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 00:09:43.500104  176813 provision.go:86] duration metric: configureAuth took 316.983913ms
	I1213 00:09:43.500137  176813 buildroot.go:189] setting minikube options for container-runtime
	I1213 00:09:43.500380  176813 config.go:182] Loaded profile config "old-k8s-version-508612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1213 00:09:43.500554  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.503567  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.503994  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.504034  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.504320  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:43.504551  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.504797  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.504978  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:43.505164  176813 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:43.505640  176813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1213 00:09:43.505668  176813 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 00:09:43.859639  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 00:09:43.859723  176813 machine.go:91] provisioned docker machine in 969.28446ms
	I1213 00:09:43.859741  176813 start.go:300] post-start starting for "old-k8s-version-508612" (driver="kvm2")
	I1213 00:09:43.859754  176813 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 00:09:43.859781  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:43.860174  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 00:09:43.860207  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:43.863407  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.863903  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:43.863944  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:43.864142  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:43.864340  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:43.864604  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:43.864907  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:09:43.957616  176813 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 00:09:43.963381  176813 info.go:137] Remote host: Buildroot 2021.02.12
	I1213 00:09:43.963413  176813 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/addons for local assets ...
	I1213 00:09:43.963489  176813 filesync.go:126] Scanning /home/jenkins/minikube-integration/17777-136241/.minikube/files for local assets ...
	I1213 00:09:43.963594  176813 filesync.go:149] local asset: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem -> 1435412.pem in /etc/ssl/certs
	I1213 00:09:43.963710  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 00:09:43.972902  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:44.001469  176813 start.go:303] post-start completed in 141.706486ms
	I1213 00:09:44.001503  176813 fix.go:56] fixHost completed within 21.932134773s
	I1213 00:09:44.001532  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:44.004923  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.005334  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:44.005410  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.005545  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:44.005846  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:44.006067  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:44.006198  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:44.006401  176813 main.go:141] libmachine: Using SSH client type: native
	I1213 00:09:44.006815  176813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1213 00:09:44.006841  176813 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1213 00:09:44.134363  176813 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702426184.079167065
	
	I1213 00:09:44.134389  176813 fix.go:206] guest clock: 1702426184.079167065
	I1213 00:09:44.134398  176813 fix.go:219] Guest: 2023-12-13 00:09:44.079167065 +0000 UTC Remote: 2023-12-13 00:09:44.001508908 +0000 UTC m=+368.244893563 (delta=77.658157ms)
	I1213 00:09:44.134434  176813 fix.go:190] guest clock delta is within tolerance: 77.658157ms
	I1213 00:09:44.134446  176813 start.go:83] releasing machines lock for "old-k8s-version-508612", held for 22.06510734s
	I1213 00:09:44.134469  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:44.134760  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetIP
	I1213 00:09:44.137820  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.138245  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:44.138275  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.138444  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:44.138957  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:44.139152  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:09:44.139229  176813 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 00:09:44.139300  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:44.139358  176813 ssh_runner.go:195] Run: cat /version.json
	I1213 00:09:44.139383  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:09:44.142396  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.142920  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:44.142981  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.143041  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.143197  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:44.143340  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:44.143473  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:44.143487  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:44.143505  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:44.143633  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:09:44.143628  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:09:44.143786  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:09:44.143913  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:09:44.144041  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:09:44.235010  176813 ssh_runner.go:195] Run: systemctl --version
	I1213 00:09:44.263174  176813 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 00:09:44.424330  176813 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 00:09:44.433495  176813 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 00:09:44.433573  176813 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 00:09:44.454080  176813 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 00:09:44.454106  176813 start.go:475] detecting cgroup driver to use...
	I1213 00:09:44.454173  176813 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 00:09:44.482370  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 00:09:44.499334  176813 docker.go:203] disabling cri-docker service (if available) ...
	I1213 00:09:44.499429  176813 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 00:09:44.516413  176813 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 00:09:44.529636  176813 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 00:09:44.638215  176813 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 00:09:44.774229  176813 docker.go:219] disabling docker service ...
	I1213 00:09:44.774304  176813 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 00:09:44.790414  176813 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 00:09:44.804909  176813 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 00:09:44.938205  176813 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 00:09:45.069429  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 00:09:45.085783  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 00:09:45.105487  176813 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1213 00:09:45.105558  176813 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:45.117662  176813 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 00:09:45.117789  176813 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:45.129560  176813 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:45.139168  176813 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 00:09:45.148466  176813 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 00:09:45.157626  176813 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 00:09:45.166608  176813 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 00:09:45.166675  176813 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 00:09:45.179666  176813 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 00:09:45.190356  176813 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 00:09:45.366019  176813 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 00:09:45.549130  176813 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 00:09:45.549209  176813 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 00:09:45.554753  176813 start.go:543] Will wait 60s for crictl version
	I1213 00:09:45.554809  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:45.559452  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 00:09:45.605106  176813 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
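	[editor's note] After restarting CRI-O, the log above waits up to 60s for /var/run/crio/crio.sock to exist before querying crictl. A small, assumed Go sketch of that wait (the helper name and poll interval are made up for illustration):

	```go
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or the timeout expires, mirroring
	// the "Will wait 60s for socket path /var/run/crio/crio.sock" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("socket %s did not appear within %s", path, timeout)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}
	```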
	I1213 00:09:45.605180  176813 ssh_runner.go:195] Run: crio --version
	I1213 00:09:45.654428  176813 ssh_runner.go:195] Run: crio --version
	I1213 00:09:45.711107  176813 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1213 00:09:45.712598  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetIP
	I1213 00:09:45.716022  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:45.716371  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:09:45.716405  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:09:45.716751  176813 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 00:09:45.722339  176813 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:45.739528  176813 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1213 00:09:45.739594  176813 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:45.786963  176813 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1213 00:09:45.787044  176813 ssh_runner.go:195] Run: which lz4
	I1213 00:09:45.791462  176813 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1213 00:09:45.795923  176813 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 00:09:45.795952  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1213 00:09:43.228658  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:45.231385  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:45.721999  177307 pod_ready.go:92] pod "coredns-76f75df574-87nc6" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:45.722026  177307 pod_ready.go:81] duration metric: took 4.023377357s waiting for pod "coredns-76f75df574-87nc6" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:45.722038  177307 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:47.744891  177307 pod_ready.go:102] pod "etcd-no-preload-143586" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:48.255190  177307 pod_ready.go:92] pod "etcd-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:48.255220  177307 pod_ready.go:81] duration metric: took 2.533174326s waiting for pod "etcd-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:48.255233  177307 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:48.263450  177307 pod_ready.go:92] pod "kube-apiserver-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:48.263477  177307 pod_ready.go:81] duration metric: took 8.236475ms waiting for pod "kube-apiserver-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:48.263489  177307 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:48.212975  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:48.213009  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:48.213033  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:48.303921  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:09:48.303963  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:09:48.435167  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:48.442421  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:48.442455  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:48.934740  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:48.941126  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:48.941161  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:49.434967  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:49.444960  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1213 00:09:49.445016  177409 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1213 00:09:49.935234  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:09:49.941400  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I1213 00:09:49.951057  177409 api_server.go:141] control plane version: v1.28.4
	I1213 00:09:49.951094  177409 api_server.go:131] duration metric: took 6.518109828s to wait for apiserver health ...
	I1213 00:09:49.951105  177409 cni.go:84] Creating CNI manager for ""
	I1213 00:09:49.951115  177409 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:49.953198  177409 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:09:49.954914  177409 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:09:49.989291  177409 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:09:47.527308  176813 crio.go:444] Took 1.735860 seconds to copy over tarball
	I1213 00:09:47.527390  176813 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 00:09:50.641162  176813 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.113740813s)
	I1213 00:09:50.641195  176813 crio.go:451] Took 3.113856 seconds to extract the tarball
	I1213 00:09:50.641208  176813 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 00:09:50.683194  176813 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 00:09:50.729476  176813 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1213 00:09:50.729503  176813 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 00:09:50.729574  176813 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:50.729602  176813 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1213 00:09:50.729611  176813 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1213 00:09:50.729617  176813 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:50.729653  176813 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:50.729605  176813 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:50.729572  176813 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:50.729589  176813 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:50.730849  176813 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:50.730908  176813 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:50.730924  176813 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1213 00:09:50.730968  176813 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:50.730986  176813 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:50.730997  176813 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:50.730847  176813 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1213 00:09:50.731163  176813 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:47.235674  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:49.728030  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:50.051886  177409 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:09:50.069774  177409 system_pods.go:59] 8 kube-system pods found
	I1213 00:09:50.069817  177409 system_pods.go:61] "coredns-5dd5756b68-ftv9l" [60d9730b-2e6b-4263-a70d-273cf6837f60] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 00:09:50.069834  177409 system_pods.go:61] "etcd-default-k8s-diff-port-743278" [4c78aadc-8213-4cd3-a365-e8d71d00b1ed] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 00:09:50.069849  177409 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-743278" [2590235b-fc24-41ed-8879-909cbba26d5c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 00:09:50.069862  177409 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-743278" [58396a31-0a0d-4a3f-b658-e27f836affd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 00:09:50.069875  177409 system_pods.go:61] "kube-proxy-zk4wl" [e20fe8f7-0c1f-4be3-8184-cd3d6cc19a43] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 00:09:50.069887  177409 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-743278" [e4312815-a719-4ab5-b628-9958dd7ce658] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 00:09:50.069907  177409 system_pods.go:61] "metrics-server-57f55c9bc5-6q9jg" [b1849258-4fd1-43a5-b67b-02d8e44acd8b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:09:50.069919  177409 system_pods.go:61] "storage-provisioner" [d87ee16e-300f-4797-b0be-efc256d0e827] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 00:09:50.069932  177409 system_pods.go:74] duration metric: took 18.020213ms to wait for pod list to return data ...
	I1213 00:09:50.069944  177409 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:09:50.073659  177409 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:09:50.073688  177409 node_conditions.go:123] node cpu capacity is 2
	I1213 00:09:50.073701  177409 node_conditions.go:105] duration metric: took 3.752016ms to run NodePressure ...
	I1213 00:09:50.073722  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:09:50.545413  177409 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1213 00:09:50.559389  177409 kubeadm.go:787] kubelet initialised
	I1213 00:09:50.559421  177409 kubeadm.go:788] duration metric: took 13.971205ms waiting for restarted kubelet to initialise ...
	I1213 00:09:50.559442  177409 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:50.568069  177409 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.580294  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.580327  177409 pod_ready.go:81] duration metric: took 12.225698ms waiting for pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.580340  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.580348  177409 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.588859  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.588893  177409 pod_ready.go:81] duration metric: took 8.526992ms waiting for pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.588909  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.588917  177409 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.609726  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.609759  177409 pod_ready.go:81] duration metric: took 20.834011ms waiting for pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.609773  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.609781  177409 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.626724  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.626757  177409 pod_ready.go:81] duration metric: took 16.966751ms waiting for pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.626770  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.626777  177409 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zk4wl" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:50.950893  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "kube-proxy-zk4wl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.950927  177409 pod_ready.go:81] duration metric: took 324.143266ms waiting for pod "kube-proxy-zk4wl" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:50.950939  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "kube-proxy-zk4wl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:50.950948  177409 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:51.465200  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:51.465227  177409 pod_ready.go:81] duration metric: took 514.267219ms waiting for pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:51.465242  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:51.465251  177409 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:52.111655  177409 pod_ready.go:97] node "default-k8s-diff-port-743278" hosting pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:52.111690  177409 pod_ready.go:81] duration metric: took 646.423162ms waiting for pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace to be "Ready" ...
	E1213 00:09:52.111707  177409 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-743278" hosting pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:52.111716  177409 pod_ready.go:38] duration metric: took 1.552263211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:52.111735  177409 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:09:52.125125  177409 ops.go:34] apiserver oom_adj: -16
	I1213 00:09:52.125152  177409 kubeadm.go:640] restartCluster took 22.955643397s
	I1213 00:09:52.125175  177409 kubeadm.go:406] StartCluster complete in 23.016262726s
	I1213 00:09:52.125204  177409 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:52.125379  177409 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:09:52.128126  177409 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:52.226763  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:09:52.226947  177409 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:09:52.227030  177409 config.go:182] Loaded profile config "default-k8s-diff-port-743278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:09:52.227060  177409 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-743278"
	I1213 00:09:52.227071  177409 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-743278"
	I1213 00:09:52.227082  177409 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-743278"
	I1213 00:09:52.227088  177409 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-743278"
	W1213 00:09:52.227092  177409 addons.go:240] addon storage-provisioner should already be in state true
	I1213 00:09:52.227115  177409 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-743278"
	I1213 00:09:52.227154  177409 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-743278"
	I1213 00:09:52.227165  177409 host.go:66] Checking if "default-k8s-diff-port-743278" exists ...
	W1213 00:09:52.227173  177409 addons.go:240] addon metrics-server should already be in state true
	I1213 00:09:52.227252  177409 host.go:66] Checking if "default-k8s-diff-port-743278" exists ...
	I1213 00:09:52.227633  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.227667  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.227633  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.227698  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.227728  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.227794  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.500974  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I1213 00:09:52.501503  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.502103  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.502130  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.502518  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.503096  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.503120  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40999
	I1213 00:09:52.503173  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.503249  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35141
	I1213 00:09:52.503460  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.503653  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.503952  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.503979  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.504117  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.504137  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.504326  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.504485  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.504680  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:52.504910  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.504957  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.508425  177409 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-743278"
	W1213 00:09:52.508466  177409 addons.go:240] addon default-storageclass should already be in state true
	I1213 00:09:52.508495  177409 host.go:66] Checking if "default-k8s-diff-port-743278" exists ...
	I1213 00:09:52.508941  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:52.508989  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:52.520593  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35027
	I1213 00:09:52.521055  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37697
	I1213 00:09:52.521104  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.521443  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.521602  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.521630  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.521891  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:52.521917  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:52.521956  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.522162  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:52.522300  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:52.522506  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:52.523942  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:52.524208  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40367
	I1213 00:09:52.524419  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:52.612780  177409 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 00:09:52.524612  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:52.755661  177409 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 00:09:52.941509  177409 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:52.941551  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 00:09:53.149407  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:52.881597  177409 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-743278" context rescaled to 1 replicas
	I1213 00:09:53.149472  177409 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:09:53.149496  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:09:52.884700  177409 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1213 00:09:52.756216  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:53.149523  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:53.149532  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:53.149484  177409 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:09:53.150147  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:53.153109  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.153288  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.360880  177409 out.go:177] * Verifying Kubernetes components...
	I1213 00:09:53.153717  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:53.153952  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:53.361036  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:50.301405  177307 pod_ready.go:102] pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:52.803001  177307 pod_ready.go:102] pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:53.361074  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:53.466451  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.361087  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.361322  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:53.466546  177409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:09:53.361364  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:53.361590  177409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:09:53.466661  177409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:09:53.466906  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:53.466963  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:53.467166  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:53.467266  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:53.489618  177409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41785
	I1213 00:09:53.490349  177409 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:09:53.490932  177409 main.go:141] libmachine: Using API Version  1
	I1213 00:09:53.490951  177409 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:09:53.491365  177409 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:09:53.491579  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetState
	I1213 00:09:53.494223  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .DriverName
	I1213 00:09:53.495774  177409 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:09:53.495796  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:09:53.495816  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHHostname
	I1213 00:09:53.499620  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.500099  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:a8:22", ip: ""} in network mk-default-k8s-diff-port-743278: {Iface:virbr3 ExpiryTime:2023-12-13 01:09:15 +0000 UTC Type:0 Mac:52:54:00:d1:a8:22 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-743278 Clientid:01:52:54:00:d1:a8:22}
	I1213 00:09:53.500124  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | domain default-k8s-diff-port-743278 has defined IP address 192.168.72.144 and MAC address 52:54:00:d1:a8:22 in network mk-default-k8s-diff-port-743278
	I1213 00:09:53.500405  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHPort
	I1213 00:09:53.500592  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHKeyPath
	I1213 00:09:53.500734  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .GetSSHUsername
	I1213 00:09:53.501069  177409 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/default-k8s-diff-port-743278/id_rsa Username:docker}
	I1213 00:09:53.667878  177409 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-743278" to be "Ready" ...
	I1213 00:09:53.806167  177409 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 00:09:53.806194  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 00:09:53.807837  177409 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:09:53.808402  177409 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:09:53.915171  177409 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 00:09:53.915199  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 00:09:53.993146  177409 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:09:53.993172  177409 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 00:09:54.071008  177409 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:09:50.865405  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:50.866538  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:50.867587  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1213 00:09:50.871289  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:50.872472  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1213 00:09:50.878541  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:50.882665  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:50.978405  176813 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1213 00:09:50.978458  176813 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:50.978527  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.038778  176813 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1213 00:09:51.038824  176813 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:51.038877  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.048868  176813 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1213 00:09:51.048925  176813 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1213 00:09:51.048983  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.054956  176813 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1213 00:09:51.055003  176813 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:51.055045  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.055045  176813 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1213 00:09:51.055133  176813 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1213 00:09:51.055162  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.069915  176813 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1213 00:09:51.069971  176813 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:51.070018  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.073904  176813 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1213 00:09:51.073955  176813 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:51.073990  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1213 00:09:51.074058  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1213 00:09:51.073997  176813 ssh_runner.go:195] Run: which crictl
	I1213 00:09:51.074127  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1213 00:09:51.074173  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1213 00:09:51.074270  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1213 00:09:51.076866  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1213 00:09:51.216889  176813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1213 00:09:51.217032  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1213 00:09:51.217046  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1213 00:09:51.217118  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1213 00:09:51.217157  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1213 00:09:51.217213  176813 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1213 00:09:51.217804  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1213 00:09:51.217887  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1213 00:09:51.224310  176813 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1213 00:09:51.224329  176813 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1213 00:09:51.224373  176813 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1213 00:09:51.270398  176813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1213 00:09:51.651719  176813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:09:53.599238  176813 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.374835203s)
	I1213 00:09:53.599269  176813 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1213 00:09:53.599323  176813 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.947557973s)
	I1213 00:09:53.599398  176813 cache_images.go:92] LoadImages completed in 2.869881827s
	W1213 00:09:53.599497  176813 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17777-136241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I1213 00:09:53.599587  176813 ssh_runner.go:195] Run: crio config
	I1213 00:09:53.669735  176813 cni.go:84] Creating CNI manager for ""
	I1213 00:09:53.669767  176813 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:09:53.669792  176813 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1213 00:09:53.669814  176813 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.70 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-508612 NodeName:old-k8s-version-508612 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1213 00:09:53.669991  176813 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-508612"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-508612
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.70:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 00:09:53.670076  176813 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-508612 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-508612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1213 00:09:53.670138  176813 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1213 00:09:53.680033  176813 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 00:09:53.680120  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 00:09:53.689595  176813 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1213 00:09:53.707167  176813 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 00:09:53.726978  176813 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1213 00:09:53.746191  176813 ssh_runner.go:195] Run: grep 192.168.39.70	control-plane.minikube.internal$ /etc/hosts
	I1213 00:09:53.750290  176813 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 00:09:53.763369  176813 certs.go:56] Setting up /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612 for IP: 192.168.39.70
	I1213 00:09:53.763407  176813 certs.go:190] acquiring lock for shared ca certs: {Name:mk24dc53fdaf9923529fa780774754a10c47f0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:09:53.763598  176813 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key
	I1213 00:09:53.763671  176813 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key
	I1213 00:09:53.763776  176813 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/client.key
	I1213 00:09:53.763855  176813 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/apiserver.key.5467de6f
	I1213 00:09:53.763914  176813 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/proxy-client.key
	I1213 00:09:53.764055  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem (1338 bytes)
	W1213 00:09:53.764098  176813 certs.go:433] ignoring /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541_empty.pem, impossibly tiny 0 bytes
	I1213 00:09:53.764115  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 00:09:53.764158  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/ca.pem (1082 bytes)
	I1213 00:09:53.764195  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/cert.pem (1123 bytes)
	I1213 00:09:53.764238  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/certs/home/jenkins/minikube-integration/17777-136241/.minikube/certs/key.pem (1679 bytes)
	I1213 00:09:53.764297  176813 certs.go:437] found cert: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem (1708 bytes)
	I1213 00:09:53.765315  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1213 00:09:53.793100  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 00:09:53.821187  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 00:09:53.847791  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 00:09:53.873741  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 00:09:53.903484  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 00:09:53.930420  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 00:09:53.958706  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 00:09:53.986236  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 00:09:54.011105  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/certs/143541.pem --> /usr/share/ca-certificates/143541.pem (1338 bytes)
	I1213 00:09:54.034546  176813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/ssl/certs/1435412.pem --> /usr/share/ca-certificates/1435412.pem (1708 bytes)
	I1213 00:09:54.070680  176813 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 00:09:54.093063  176813 ssh_runner.go:195] Run: openssl version
	I1213 00:09:54.100686  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 00:09:54.114647  176813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:54.121380  176813 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:54.121463  176813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 00:09:54.128895  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 00:09:54.142335  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/143541.pem && ln -fs /usr/share/ca-certificates/143541.pem /etc/ssl/certs/143541.pem"
	I1213 00:09:54.155146  176813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/143541.pem
	I1213 00:09:54.159746  176813 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 23:06 /usr/share/ca-certificates/143541.pem
	I1213 00:09:54.159817  176813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/143541.pem
	I1213 00:09:54.166153  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/143541.pem /etc/ssl/certs/51391683.0"
	I1213 00:09:54.176190  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1435412.pem && ln -fs /usr/share/ca-certificates/1435412.pem /etc/ssl/certs/1435412.pem"
	I1213 00:09:54.187049  176813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1435412.pem
	I1213 00:09:54.191667  176813 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 23:06 /usr/share/ca-certificates/1435412.pem
	I1213 00:09:54.191737  176813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1435412.pem
	I1213 00:09:54.197335  176813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1435412.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 00:09:54.208790  176813 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1213 00:09:54.213230  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 00:09:54.219377  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 00:09:54.225539  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 00:09:54.232970  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 00:09:54.240720  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 00:09:54.247054  176813 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
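The six `openssl x509 -noout -in <cert> -checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours. A minimal Go sketch of the same check, assuming a PEM-encoded certificate on disk (the path is copied from the log; this is illustrative, not minikube's implementation):

// Minimal sketch (not minikube's code): report whether a PEM certificate
// expires within the next 24 hours, mirroring what
// `openssl x509 -noout -checkend 86400` is used for above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when the certificate's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path copied from the log above; purely illustrative.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}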
	I1213 00:09:54.253486  176813 kubeadm.go:404] StartCluster: {Name:old-k8s-version-508612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.16.0 ClusterName:old-k8s-version-508612 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1213 00:09:54.253600  176813 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 00:09:54.253674  176813 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:09:54.303024  176813 cri.go:89] found id: ""
	I1213 00:09:54.303102  176813 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 00:09:54.317795  176813 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1213 00:09:54.317827  176813 kubeadm.go:636] restartCluster start
	I1213 00:09:54.317884  176813 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 00:09:54.331180  176813 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:54.332572  176813 kubeconfig.go:92] found "old-k8s-version-508612" server: "https://192.168.39.70:8443"
	I1213 00:09:54.335079  176813 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 00:09:54.346247  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:54.346292  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:54.362692  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:54.362720  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:54.362776  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:54.377570  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:54.878307  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:54.878384  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:54.891159  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:55.377679  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:55.377789  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:55.392860  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:52.229764  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:54.232636  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:55.162034  177409 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.354143542s)
	I1213 00:09:55.162091  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.162103  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.162486  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.162503  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.162519  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.162536  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.162554  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.162887  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.162916  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.162961  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.255531  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.255561  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.255844  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.255867  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.686976  177409 node_ready.go:58] node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:55.814831  177409 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.006392676s)
	I1213 00:09:55.814885  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.814905  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.815237  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.815300  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.815315  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.815325  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.815675  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.815693  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.815721  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.959447  177409 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.88836869s)
	I1213 00:09:55.959502  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.959519  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.959909  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.959931  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.959941  177409 main.go:141] libmachine: Making call to close driver server
	I1213 00:09:55.959943  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.959950  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) Calling .Close
	I1213 00:09:55.960189  177409 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:09:55.960205  177409 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:09:55.960223  177409 main.go:141] libmachine: (default-k8s-diff-port-743278) DBG | Closing plugin on server side
	I1213 00:09:55.960226  177409 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-743278"
	I1213 00:09:55.962464  177409 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1213 00:09:54.302018  177307 pod_ready.go:92] pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:54.302047  177307 pod_ready.go:81] duration metric: took 6.038549186s waiting for pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.302061  177307 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8k9x6" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.308192  177307 pod_ready.go:92] pod "kube-proxy-8k9x6" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:54.308220  177307 pod_ready.go:81] duration metric: took 6.150452ms waiting for pod "kube-proxy-8k9x6" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.308233  177307 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.829614  177307 pod_ready.go:92] pod "kube-scheduler-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:54.829639  177307 pod_ready.go:81] duration metric: took 521.398817ms waiting for pod "kube-scheduler-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:54.829649  177307 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:56.842731  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:55.964691  177409 addons.go:502] enable addons completed in 3.737755135s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1213 00:09:58.183398  177409 node_ready.go:58] node "default-k8s-diff-port-743278" has status "Ready":"False"
	I1213 00:09:58.683603  177409 node_ready.go:49] node "default-k8s-diff-port-743278" has status "Ready":"True"
	I1213 00:09:58.683629  177409 node_ready.go:38] duration metric: took 5.01572337s waiting for node "default-k8s-diff-port-743278" to be "Ready" ...
	I1213 00:09:58.683638  177409 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:09:58.692636  177409 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:58.699084  177409 pod_ready.go:92] pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace has status "Ready":"True"
	I1213 00:09:58.699103  177409 pod_ready.go:81] duration metric: took 6.437856ms waiting for pod "coredns-5dd5756b68-ftv9l" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:58.699111  177409 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:09:55.877904  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:55.877977  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:55.893729  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:56.377737  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:56.377803  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:56.389754  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:56.878464  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:56.878530  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:56.891849  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:57.377841  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:57.377929  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:57.389962  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:57.878384  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:57.878464  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:57.892518  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:58.378033  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:58.378119  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:58.391780  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:58.878309  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:58.878397  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:58.890677  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:59.378117  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:59.378239  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:59.390695  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:59.878240  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:09:59.878318  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:09:59.889688  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:00.378278  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:00.378376  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:00.390756  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:09:56.727591  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:58.729633  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:09:59.343431  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:01.344195  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:03.842943  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:00.718294  177409 pod_ready.go:102] pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:01.216472  177409 pod_ready.go:92] pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.216499  177409 pod_ready.go:81] duration metric: took 2.517381433s waiting for pod "etcd-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.216513  177409 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.221993  177409 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.222016  177409 pod_ready.go:81] duration metric: took 5.495703ms waiting for pod "kube-apiserver-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.222026  177409 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.227513  177409 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.227543  177409 pod_ready.go:81] duration metric: took 5.506889ms waiting for pod "kube-controller-manager-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.227555  177409 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zk4wl" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.485096  177409 pod_ready.go:92] pod "kube-proxy-zk4wl" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.485120  177409 pod_ready.go:81] duration metric: took 257.55839ms waiting for pod "kube-proxy-zk4wl" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.485131  177409 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.886812  177409 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:01.886843  177409 pod_ready.go:81] duration metric: took 401.704296ms waiting for pod "kube-scheduler-default-k8s-diff-port-743278" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:01.886860  177409 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:04.192658  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:00.878385  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:00.878514  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:00.891279  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:01.378010  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:01.378120  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:01.389897  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:01.878496  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:01.878581  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:01.890674  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:02.377657  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:02.377767  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:02.389165  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:02.877744  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:02.877886  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:02.889536  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:03.378083  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:03.378206  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:03.390009  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:03.878637  176813 api_server.go:166] Checking apiserver status ...
	I1213 00:10:03.878757  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 00:10:03.891565  176813 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 00:10:04.347244  176813 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1213 00:10:04.347324  176813 kubeadm.go:1135] stopping kube-system containers ...
	I1213 00:10:04.347339  176813 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 00:10:04.347431  176813 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 00:10:04.391480  176813 cri.go:89] found id: ""
	I1213 00:10:04.391558  176813 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 00:10:04.407659  176813 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:10:04.416545  176813 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:10:04.416616  176813 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:10:04.425366  176813 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1213 00:10:04.425393  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:04.553907  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:05.643662  176813 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.089700044s)
	I1213 00:10:05.643704  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:01.230857  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:03.728598  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:05.729292  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:05.843723  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:07.844549  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:06.193695  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:08.194425  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:05.881077  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:05.983444  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:06.106543  176813 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:10:06.106637  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:06.120910  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:06.637294  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:07.137087  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:07.636989  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:10:07.659899  176813 api_server.go:72] duration metric: took 1.5533541s to wait for apiserver process to appear ...
	I1213 00:10:07.659925  176813 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:10:07.659949  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:08.229410  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:10.729881  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:10.344919  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:12.842700  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:10.692378  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:12.693810  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:12.660316  176813 api_server.go:269] stopped: https://192.168.39.70:8443/healthz: Get "https://192.168.39.70:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1213 00:10:12.660365  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:13.933418  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 00:10:13.933452  176813 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 00:10:14.434114  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:14.442223  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1213 00:10:14.442261  176813 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1213 00:10:14.934425  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:14.941188  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1213 00:10:14.941232  176813 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1213 00:10:15.433614  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:10:15.441583  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I1213 00:10:15.449631  176813 api_server.go:141] control plane version: v1.16.0
	I1213 00:10:15.449656  176813 api_server.go:131] duration metric: took 7.789725712s to wait for apiserver health ...
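The healthz wait above first sees a 403 for the anonymous user, then 500 while the rbac/bootstrap-roles, bootstrap-system-priority-classes and ca-registration post-start hooks finish, and finally 200. A minimal sketch of such a polling loop, assuming an insecure TLS client against the endpoint shown in the log (illustrative only, not api_server.go):

// Minimal sketch (not api_server.go): poll an apiserver /healthz endpoint
// until it returns HTTP 200 or the deadline passes. The insecure TLS
// client and the hard-coded URL are illustrative assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			// 403/500 responses mean the apiserver is up but still initialising.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.70:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}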
	I1213 00:10:15.449671  176813 cni.go:84] Creating CNI manager for ""
	I1213 00:10:15.449677  176813 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:10:15.451328  176813 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:10:15.452690  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:10:15.463558  176813 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:10:15.482997  176813 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:10:15.493646  176813 system_pods.go:59] 7 kube-system pods found
	I1213 00:10:15.493674  176813 system_pods.go:61] "coredns-5644d7b6d9-jnhmk" [38a0c948-a47e-4566-ad47-376943787ca1] Running
	I1213 00:10:15.493679  176813 system_pods.go:61] "etcd-old-k8s-version-508612" [80e685b2-cd70-4b7d-b00c-feda3ab1a509] Running
	I1213 00:10:15.493683  176813 system_pods.go:61] "kube-apiserver-old-k8s-version-508612" [657f1d7b-4fcb-44d4-96d3-3cc659fb9f0a] Running
	I1213 00:10:15.493688  176813 system_pods.go:61] "kube-controller-manager-old-k8s-version-508612" [d84a0927-7d19-4bba-8afd-b32877a9aee3] Running
	I1213 00:10:15.493692  176813 system_pods.go:61] "kube-proxy-fpd4j" [f2e9e528-576f-4339-b208-09ee5dbe7fcb] Running
	I1213 00:10:15.493696  176813 system_pods.go:61] "kube-scheduler-old-k8s-version-508612" [ce5ff03a-23bf-4cce-8795-58e412fc841c] Running
	I1213 00:10:15.493699  176813 system_pods.go:61] "storage-provisioner" [98a03a45-0cd3-40b4-9e66-6df14db5a848] Running
	I1213 00:10:15.493706  176813 system_pods.go:74] duration metric: took 10.683423ms to wait for pod list to return data ...
	I1213 00:10:15.493715  176813 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:10:15.498679  176813 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:10:15.498726  176813 node_conditions.go:123] node cpu capacity is 2
	I1213 00:10:15.498742  176813 node_conditions.go:105] duration metric: took 5.021318ms to run NodePressure ...
	I1213 00:10:15.498767  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 00:10:15.762302  176813 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1213 00:10:15.766665  176813 retry.go:31] will retry after 288.591747ms: kubelet not initialised
	I1213 00:10:13.228878  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:15.728396  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:15.343194  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:17.344384  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:15.193995  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:17.693024  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:19.693723  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:16.063637  176813 retry.go:31] will retry after 250.40677ms: kubelet not initialised
	I1213 00:10:16.320362  176813 retry.go:31] will retry after 283.670967ms: kubelet not initialised
	I1213 00:10:16.610834  176813 retry.go:31] will retry after 810.845397ms: kubelet not initialised
	I1213 00:10:17.427101  176813 retry.go:31] will retry after 1.00058932s: kubelet not initialised
	I1213 00:10:18.498625  176813 retry.go:31] will retry after 2.616819597s: kubelet not initialised
	I1213 00:10:18.226990  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:20.228211  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:19.345330  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:21.843959  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:22.192449  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:24.193001  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:21.120283  176813 retry.go:31] will retry after 1.883694522s: kubelet not initialised
	I1213 00:10:23.009312  176813 retry.go:31] will retry after 2.899361823s: kubelet not initialised
	I1213 00:10:22.727450  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:24.729952  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:24.342673  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:26.343639  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:28.842489  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:26.696279  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:29.194453  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:25.914801  176813 retry.go:31] will retry after 8.466541404s: kubelet not initialised
	I1213 00:10:27.227947  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:29.229430  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:30.843429  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:32.844457  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:31.692122  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:33.694437  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:34.391931  176813 retry.go:31] will retry after 6.686889894s: kubelet not initialised
	I1213 00:10:31.729052  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:33.730399  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:35.344029  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:37.842694  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:36.193427  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:38.193688  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:36.226978  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:38.227307  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:40.227797  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:40.343702  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:42.841574  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:40.693443  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:42.693668  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:41.084957  176813 retry.go:31] will retry after 18.68453817s: kubelet not initialised
	I1213 00:10:42.229433  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:44.728322  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:44.843586  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:46.844269  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:45.192582  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:47.691806  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:49.692545  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:47.227469  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:49.228908  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:49.343743  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:51.843948  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:51.694308  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:54.192629  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:51.728175  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:54.226904  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:54.342077  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:56.343115  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:58.345031  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:56.193137  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:58.693873  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:59.777116  176813 kubeadm.go:787] kubelet initialised
	I1213 00:10:59.777150  176813 kubeadm.go:788] duration metric: took 44.014819539s waiting for restarted kubelet to initialise ...
	I1213 00:10:59.777162  176813 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:10:59.782802  176813 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-jnhmk" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.788307  176813 pod_ready.go:92] pod "coredns-5644d7b6d9-jnhmk" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:59.788348  176813 pod_ready.go:81] duration metric: took 5.514049ms waiting for pod "coredns-5644d7b6d9-jnhmk" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.788356  176813 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-xsbd5" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.792569  176813 pod_ready.go:92] pod "coredns-5644d7b6d9-xsbd5" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:59.792588  176813 pod_ready.go:81] duration metric: took 4.224934ms waiting for pod "coredns-5644d7b6d9-xsbd5" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.792599  176813 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.797096  176813 pod_ready.go:92] pod "etcd-old-k8s-version-508612" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:59.797119  176813 pod_ready.go:81] duration metric: took 4.508662ms waiting for pod "etcd-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.797130  176813 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.801790  176813 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-508612" in "kube-system" namespace has status "Ready":"True"
	I1213 00:10:59.801811  176813 pod_ready.go:81] duration metric: took 4.673597ms waiting for pod "kube-apiserver-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:59.801818  176813 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.175474  176813 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-508612" in "kube-system" namespace has status "Ready":"True"
	I1213 00:11:00.175504  176813 pod_ready.go:81] duration metric: took 373.677737ms waiting for pod "kube-controller-manager-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.175523  176813 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fpd4j" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.576344  176813 pod_ready.go:92] pod "kube-proxy-fpd4j" in "kube-system" namespace has status "Ready":"True"
	I1213 00:11:00.576373  176813 pod_ready.go:81] duration metric: took 400.842191ms waiting for pod "kube-proxy-fpd4j" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.576387  176813 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:10:56.229570  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:10:58.728770  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:00.843201  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:03.343182  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:01.199677  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:03.201427  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:00.976886  176813 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-508612" in "kube-system" namespace has status "Ready":"True"
	I1213 00:11:00.976908  176813 pod_ready.go:81] duration metric: took 400.512629ms waiting for pod "kube-scheduler-old-k8s-version-508612" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:00.976920  176813 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace to be "Ready" ...
	I1213 00:11:03.283224  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:05.284030  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:01.229393  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:03.728570  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:05.843264  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:08.343228  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:05.694505  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:08.197100  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:07.285680  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:09.786591  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:06.227705  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:08.229577  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:10.727791  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:10.343300  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:12.843162  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:10.695161  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:13.195051  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:12.285865  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:14.785354  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:12.728656  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:15.227890  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:14.844312  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:16.847144  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:15.692597  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:18.193383  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:17.284986  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:19.786139  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:17.229608  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:19.728503  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:19.344056  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:21.843070  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:23.844051  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:20.692417  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:22.692912  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:24.693204  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:22.285292  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:24.784342  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:22.227286  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:24.228831  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:26.342758  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:28.347392  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:26.693376  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:28.696971  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:27.284643  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:29.284776  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:26.727796  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:29.227690  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:30.843482  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:32.844695  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:31.191962  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:33.192585  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:31.285494  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:33.285863  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:35.791234  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:31.727767  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:33.728047  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:35.342092  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:37.342356  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:35.196354  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:37.693679  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:38.285349  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:40.785094  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:36.228379  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:38.728361  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:40.728752  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:39.342944  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:41.343229  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:43.842669  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:40.192636  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:42.696348  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:43.284960  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:45.783972  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:42.730357  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:45.228371  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:45.844034  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:48.345622  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:45.199304  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:47.692399  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:49.692916  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:47.784062  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:49.784533  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:47.232607  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:49.727709  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:50.842207  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:52.845393  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:52.193829  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:54.694220  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:51.784671  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:54.284709  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:51.728053  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:53.729081  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:55.342783  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:57.343274  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:56.694508  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:59.194904  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:56.285342  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:58.783460  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:56.227395  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:58.231694  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:00.727822  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:11:59.343618  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:01.842326  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:03.842653  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:01.197290  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:03.694223  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:01.285393  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:03.784968  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:05.786110  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:02.728596  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:05.227456  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:05.843038  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:08.342838  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:05.695124  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:08.192630  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:08.284460  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:10.284768  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:07.728787  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:10.227036  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:10.344532  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:12.841921  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:10.193483  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:12.196550  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:14.693706  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:12.784036  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:14.784471  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:12.227952  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:14.228178  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:14.842965  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:17.343683  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:17.193131  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:19.692561  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:16.785596  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:19.285058  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:16.726702  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:18.728269  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:19.843031  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:22.343417  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:22.191869  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:24.193973  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:21.783890  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:23.784341  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:25.784521  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:21.227269  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:23.227691  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:25.228239  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:24.343805  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:26.346354  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:28.844254  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:26.693293  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:29.193583  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:27.784904  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:30.285014  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:27.727045  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:29.728691  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:31.346007  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:33.843421  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:31.194160  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:33.691639  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:32.784701  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:35.284958  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:32.226511  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:34.228892  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:36.342384  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:38.343546  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:35.694257  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:38.191620  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:37.286143  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:39.783802  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:36.727306  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:38.728168  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:40.850557  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:43.342393  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:40.192328  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:42.192749  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:44.693406  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:41.784411  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:43.789293  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:41.228591  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:43.728133  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:45.842401  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:47.843839  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:47.193847  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:49.692840  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:46.284387  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:48.284692  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:50.285419  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:46.228594  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:48.728575  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:50.343073  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:52.843034  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:51.692895  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:54.196344  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:52.785093  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:54.785238  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:51.226704  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:53.228359  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:55.228418  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:54.847060  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:57.345339  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:56.693854  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:59.191098  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:57.285101  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:59.783955  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:57.727063  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:59.727437  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:12:59.847179  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:02.343433  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:01.192388  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:03.693056  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:01.784055  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:03.784840  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:01.727635  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:03.727705  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:04.346684  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:06.843294  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:06.192928  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:08.693240  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:06.284092  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:08.784303  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:10.784971  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:06.228019  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:08.727726  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:09.342622  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:11.343211  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:13.843894  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:10.698298  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:13.191387  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:13.285854  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:15.790625  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:11.228300  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:13.730143  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:16.343574  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:18.343896  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:15.195797  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:17.694620  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:18.283712  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:20.284937  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:16.227280  177122 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:17.419163  177122 pod_ready.go:81] duration metric: took 4m0.000090271s waiting for pod "metrics-server-57f55c9bc5-fx5pd" in "kube-system" namespace to be "Ready" ...
	E1213 00:13:17.419207  177122 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1213 00:13:17.419233  177122 pod_ready.go:38] duration metric: took 4m12.64031929s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:13:17.419260  177122 kubeadm.go:640] restartCluster took 4m32.91279931s
	W1213 00:13:17.419346  177122 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1213 00:13:17.419387  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 00:13:20.847802  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:23.342501  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:20.193039  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:22.693730  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:22.285212  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:24.783901  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:25.343029  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:27.842840  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:25.194640  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:27.692515  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:29.695543  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:26.785503  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:29.284618  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:33.603614  177122 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (16.184189808s)
	I1213 00:13:33.603692  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:13:33.617573  177122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:13:33.626779  177122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:13:33.636160  177122 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
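The failed ls above is the expected outcome right after "kubeadm reset --force", which removes the component kubeconfigs along with the rest of the cluster state under /etc/kubernetes, so minikube skips stale-config cleanup and proceeds to a fresh "kubeadm init" below. A sketch of the same check with the interpretation spelled out (paths exactly as in the log):

# A non-zero exit here simply means no stale kubeconfigs survived the reset,
# which is the signal to re-initialise the control plane from scratch.
sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
            /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf \
  || echo "no stale kubeconfigs found; running kubeadm init"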
	I1213 00:13:33.636214  177122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 00:13:33.694141  177122 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1213 00:13:33.694267  177122 kubeadm.go:322] [preflight] Running pre-flight checks
	I1213 00:13:33.853582  177122 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 00:13:33.853718  177122 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 00:13:33.853992  177122 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 00:13:34.092007  177122 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 00:13:29.844324  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:32.345926  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:34.093975  177122 out.go:204]   - Generating certificates and keys ...
	I1213 00:13:34.094125  177122 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1213 00:13:34.094198  177122 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1213 00:13:34.094297  177122 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 00:13:34.094492  177122 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1213 00:13:34.095287  177122 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 00:13:34.096041  177122 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1213 00:13:34.096841  177122 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1213 00:13:34.097551  177122 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1213 00:13:34.098399  177122 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 00:13:34.099122  177122 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 00:13:34.099844  177122 kubeadm.go:322] [certs] Using the existing "sa" key
	I1213 00:13:34.099929  177122 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 00:13:34.191305  177122 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 00:13:34.425778  177122 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 00:13:34.601958  177122 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 00:13:34.747536  177122 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 00:13:34.748230  177122 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 00:13:34.750840  177122 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 00:13:32.193239  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:34.691928  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:31.286291  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:33.786852  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:34.752409  177122 out.go:204]   - Booting up control plane ...
	I1213 00:13:34.752562  177122 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 00:13:34.752659  177122 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 00:13:34.752994  177122 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 00:13:34.772157  177122 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 00:13:34.774789  177122 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 00:13:34.774854  177122 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1213 00:13:34.926546  177122 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 00:13:34.346782  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:36.847723  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:36.694243  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:39.195903  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:36.284979  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:38.285685  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:40.286174  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:39.345989  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:41.353093  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:43.847024  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:43.435528  177122 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.506764 seconds
	I1213 00:13:43.435691  177122 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 00:13:43.454840  177122 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 00:13:43.997250  177122 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 00:13:43.997537  177122 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-335807 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 00:13:44.513097  177122 kubeadm.go:322] [bootstrap-token] Using token: a9yhsz.n5p4z1j5jkbj68ov
	I1213 00:13:44.514695  177122 out.go:204]   - Configuring RBAC rules ...
	I1213 00:13:44.514836  177122 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 00:13:44.520134  177122 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 00:13:44.528726  177122 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 00:13:44.535029  177122 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 00:13:44.539162  177122 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 00:13:44.545990  177122 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 00:13:44.561964  177122 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 00:13:44.831402  177122 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1213 00:13:44.927500  177122 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1213 00:13:44.931294  177122 kubeadm.go:322] 
	I1213 00:13:44.931371  177122 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1213 00:13:44.931389  177122 kubeadm.go:322] 
	I1213 00:13:44.931500  177122 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1213 00:13:44.931509  177122 kubeadm.go:322] 
	I1213 00:13:44.931535  177122 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1213 00:13:44.931605  177122 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 00:13:44.931674  177122 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 00:13:44.931681  177122 kubeadm.go:322] 
	I1213 00:13:44.931743  177122 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1213 00:13:44.931752  177122 kubeadm.go:322] 
	I1213 00:13:44.931838  177122 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 00:13:44.931861  177122 kubeadm.go:322] 
	I1213 00:13:44.931938  177122 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1213 00:13:44.932026  177122 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 00:13:44.932139  177122 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 00:13:44.932151  177122 kubeadm.go:322] 
	I1213 00:13:44.932260  177122 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 00:13:44.932367  177122 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1213 00:13:44.932386  177122 kubeadm.go:322] 
	I1213 00:13:44.932533  177122 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token a9yhsz.n5p4z1j5jkbj68ov \
	I1213 00:13:44.932702  177122 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d \
	I1213 00:13:44.932726  177122 kubeadm.go:322] 	--control-plane 
	I1213 00:13:44.932730  177122 kubeadm.go:322] 
	I1213 00:13:44.932797  177122 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1213 00:13:44.932808  177122 kubeadm.go:322] 
	I1213 00:13:44.932927  177122 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token a9yhsz.n5p4z1j5jkbj68ov \
	I1213 00:13:44.933074  177122 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1213 00:13:44.933953  177122 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 00:13:44.934004  177122 cni.go:84] Creating CNI manager for ""
	I1213 00:13:44.934026  177122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:13:44.935893  177122 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:13:41.694337  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:44.192303  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:42.783865  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:44.784599  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:44.937355  177122 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:13:44.961248  177122 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
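The 457-byte conflist copied above is not reproduced in the log. For orientation only, a generic bridge-plus-portmap CNI configuration of the kind placed at /etc/cni/net.d/1-k8s.conflist looks roughly like the following; the contents here are illustrative, not the actual file minikube writes:

# Hypothetical example: field values (subnet, bridge name, plugin options) are placeholders.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF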
	I1213 00:13:45.005684  177122 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:13:45.005758  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:45.005789  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=embed-certs-335807 minikube.k8s.io/updated_at=2023_12_13T00_13_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:45.117205  177122 ops.go:34] apiserver oom_adj: -16
	I1213 00:13:45.402961  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:45.532503  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:46.343927  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:48.843509  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:46.197988  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:48.691611  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:46.785080  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:49.283316  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:46.138647  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:46.639104  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:47.139139  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:47.638244  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:48.138634  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:48.638352  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:49.138616  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:49.639061  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:50.138633  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:50.639013  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:51.343525  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:53.345044  177307 pod_ready.go:102] pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:50.693254  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:52.693448  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:51.286352  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:53.782966  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:55.786792  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:51.138430  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:51.638340  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:52.138696  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:52.638727  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:53.138509  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:53.639092  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:54.138153  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:54.638781  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:55.138875  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:55.639166  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:56.138534  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:56.638726  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:57.138427  177122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:13:57.273101  177122 kubeadm.go:1088] duration metric: took 12.26741009s to wait for elevateKubeSystemPrivileges.
	I1213 00:13:57.273139  177122 kubeadm.go:406] StartCluster complete in 5m12.825293837s
	I1213 00:13:57.273163  177122 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:13:57.273294  177122 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:13:57.275845  177122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:13:57.276142  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:13:57.276488  177122 config.go:182] Loaded profile config "embed-certs-335807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1213 00:13:57.276665  177122 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:13:57.276739  177122 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-335807"
	I1213 00:13:57.276756  177122 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-335807"
	W1213 00:13:57.276765  177122 addons.go:240] addon storage-provisioner should already be in state true
	I1213 00:13:57.276812  177122 host.go:66] Checking if "embed-certs-335807" exists ...
	I1213 00:13:57.277245  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.277283  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.277356  177122 addons.go:69] Setting default-storageclass=true in profile "embed-certs-335807"
	I1213 00:13:57.277374  177122 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-335807"
	I1213 00:13:57.277528  177122 addons.go:69] Setting metrics-server=true in profile "embed-certs-335807"
	I1213 00:13:57.277545  177122 addons.go:231] Setting addon metrics-server=true in "embed-certs-335807"
	W1213 00:13:57.277552  177122 addons.go:240] addon metrics-server should already be in state true
	I1213 00:13:57.277599  177122 host.go:66] Checking if "embed-certs-335807" exists ...
	I1213 00:13:57.277791  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.277820  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.277923  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.277945  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.296571  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I1213 00:13:57.299879  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I1213 00:13:57.299897  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39203
	I1213 00:13:57.300251  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.300833  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.300906  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.300923  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.300935  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.301294  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.301309  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.301330  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.301419  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.301427  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.301497  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:13:57.301728  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.301774  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.302199  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.302232  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.303181  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.303222  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.304586  177122 addons.go:231] Setting addon default-storageclass=true in "embed-certs-335807"
	W1213 00:13:57.304601  177122 addons.go:240] addon default-storageclass should already be in state true
	I1213 00:13:57.304620  177122 host.go:66] Checking if "embed-certs-335807" exists ...
	I1213 00:13:57.304860  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.304891  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.323403  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I1213 00:13:57.324103  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.324810  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40115
	I1213 00:13:57.324961  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.324985  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.325197  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.325332  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.325518  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:13:57.325910  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.325935  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.326524  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.326731  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:13:57.328013  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:13:57.329895  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:13:57.332188  177122 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 00:13:57.333332  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I1213 00:13:57.333375  177122 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 00:13:57.334952  177122 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:13:57.333392  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 00:13:57.333795  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.337096  177122 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:13:57.337110  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:13:57.337124  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:13:57.337162  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:13:57.337564  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.337585  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.339793  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.340514  177122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:13:57.340572  177122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:13:57.340821  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.341606  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:13:57.341657  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.341829  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:13:57.342023  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:13:57.342206  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:13:57.342411  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:13:57.347105  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.347512  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:13:57.347538  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.347782  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:13:57.347974  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:13:57.348108  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:13:57.348213  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:13:57.359690  177122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I1213 00:13:57.360385  177122 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:13:57.361065  177122 main.go:141] libmachine: Using API Version  1
	I1213 00:13:57.361093  177122 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:13:57.361567  177122 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:13:57.361777  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetState
	I1213 00:13:57.363693  177122 main.go:141] libmachine: (embed-certs-335807) Calling .DriverName
	I1213 00:13:57.364020  177122 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:13:57.364037  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:13:57.364056  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHHostname
	I1213 00:13:57.367409  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.367874  177122 main.go:141] libmachine: (embed-certs-335807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:1b:c0", ip: ""} in network mk-embed-certs-335807: {Iface:virbr4 ExpiryTime:2023-12-13 01:08:25 +0000 UTC Type:0 Mac:52:54:00:20:1b:c0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:embed-certs-335807 Clientid:01:52:54:00:20:1b:c0}
	I1213 00:13:57.367904  177122 main.go:141] libmachine: (embed-certs-335807) DBG | domain embed-certs-335807 has defined IP address 192.168.61.249 and MAC address 52:54:00:20:1b:c0 in network mk-embed-certs-335807
	I1213 00:13:57.368086  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHPort
	I1213 00:13:57.368287  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHKeyPath
	I1213 00:13:57.368470  177122 main.go:141] libmachine: (embed-certs-335807) Calling .GetSSHUsername
	I1213 00:13:57.368619  177122 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/embed-certs-335807/id_rsa Username:docker}
	I1213 00:13:57.399353  177122 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-335807" context rescaled to 1 replicas
	I1213 00:13:57.399391  177122 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.249 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:13:57.401371  177122 out.go:177] * Verifying Kubernetes components...
	I1213 00:13:54.829811  177307 pod_ready.go:81] duration metric: took 4m0.000140793s waiting for pod "metrics-server-57f55c9bc5-px5lm" in "kube-system" namespace to be "Ready" ...
	E1213 00:13:54.829844  177307 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1213 00:13:54.829878  177307 pod_ready.go:38] duration metric: took 4m13.138964255s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:13:54.829912  177307 kubeadm.go:640] restartCluster took 4m33.090839538s
	W1213 00:13:54.829977  177307 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1213 00:13:54.830014  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 00:13:55.192745  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:57.193249  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:59.196279  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:13:57.403699  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:13:57.551632  177122 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 00:13:57.551656  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 00:13:57.590132  177122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:13:57.617477  177122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:13:57.648290  177122 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 00:13:57.648324  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 00:13:57.724394  177122 node_ready.go:35] waiting up to 6m0s for node "embed-certs-335807" to be "Ready" ...
	I1213 00:13:57.724498  177122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 00:13:57.751666  177122 node_ready.go:49] node "embed-certs-335807" has status "Ready":"True"
	I1213 00:13:57.751704  177122 node_ready.go:38] duration metric: took 27.274531ms waiting for node "embed-certs-335807" to be "Ready" ...
	I1213 00:13:57.751718  177122 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:13:57.764283  177122 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gs4kb" in "kube-system" namespace to be "Ready" ...
	I1213 00:13:57.835941  177122 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:13:57.835968  177122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 00:13:58.040994  177122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:13:59.867561  177122 pod_ready.go:102] pod "coredns-5dd5756b68-gs4kb" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:00.210713  177122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.620538044s)
	I1213 00:14:00.210745  177122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.593229432s)
	I1213 00:14:00.210763  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.210775  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.210805  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.210846  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.210892  177122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.169863052s)
	I1213 00:14:00.210932  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.210951  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.210803  177122 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.48627637s)
	I1213 00:14:00.211241  177122 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1213 00:14:00.211428  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.211467  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.211477  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.211486  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.211496  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.211804  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.211843  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.211851  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.211860  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.211869  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.211979  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.212025  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.212033  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.212251  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.213205  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.213214  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.213221  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.213253  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.213269  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.213287  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.213300  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.213565  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.213592  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.213600  177122 addons.go:467] Verifying addon metrics-server=true in "embed-certs-335807"
	I1213 00:14:00.213633  177122 main.go:141] libmachine: (embed-certs-335807) DBG | Closing plugin on server side
	I1213 00:14:00.231892  177122 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:00.231921  177122 main.go:141] libmachine: (embed-certs-335807) Calling .Close
	I1213 00:14:00.232238  177122 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:00.232257  177122 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:00.234089  177122 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1213 00:13:58.285584  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:00.286469  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:00.235676  177122 addons.go:502] enable addons completed in 2.959016059s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1213 00:14:01.848071  177122 pod_ready.go:92] pod "coredns-5dd5756b68-gs4kb" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.848093  177122 pod_ready.go:81] duration metric: took 4.083780035s waiting for pod "coredns-5dd5756b68-gs4kb" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.848101  177122 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t92hd" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.854062  177122 pod_ready.go:92] pod "coredns-5dd5756b68-t92hd" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.854082  177122 pod_ready.go:81] duration metric: took 5.975194ms waiting for pod "coredns-5dd5756b68-t92hd" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.854090  177122 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.864033  177122 pod_ready.go:92] pod "etcd-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.864060  177122 pod_ready.go:81] duration metric: took 9.963384ms waiting for pod "etcd-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.864072  177122 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.875960  177122 pod_ready.go:92] pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.875990  177122 pod_ready.go:81] duration metric: took 11.909604ms waiting for pod "kube-apiserver-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.876004  177122 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.882084  177122 pod_ready.go:92] pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:01.882107  177122 pod_ready.go:81] duration metric: took 6.092978ms waiting for pod "kube-controller-manager-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:01.882118  177122 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ccq47" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:02.645363  177122 pod_ready.go:92] pod "kube-proxy-ccq47" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:02.645389  177122 pod_ready.go:81] duration metric: took 763.264171ms waiting for pod "kube-proxy-ccq47" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:02.645399  177122 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:03.045476  177122 pod_ready.go:92] pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:03.045502  177122 pod_ready.go:81] duration metric: took 400.097321ms waiting for pod "kube-scheduler-embed-certs-335807" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:03.045513  177122 pod_ready.go:38] duration metric: took 5.293782674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:14:03.045530  177122 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:14:03.045584  177122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:14:03.062802  177122 api_server.go:72] duration metric: took 5.663381439s to wait for apiserver process to appear ...
	I1213 00:14:03.062827  177122 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:14:03.062848  177122 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8443/healthz ...
	I1213 00:14:03.068482  177122 api_server.go:279] https://192.168.61.249:8443/healthz returned 200:
	ok
	I1213 00:14:03.069909  177122 api_server.go:141] control plane version: v1.28.4
	I1213 00:14:03.069934  177122 api_server.go:131] duration metric: took 7.099309ms to wait for apiserver health ...
	I1213 00:14:03.069943  177122 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:14:03.248993  177122 system_pods.go:59] 9 kube-system pods found
	I1213 00:14:03.249025  177122 system_pods.go:61] "coredns-5dd5756b68-gs4kb" [d4b86e83-a0a1-4bf8-958e-e154e91f47ef] Running
	I1213 00:14:03.249032  177122 system_pods.go:61] "coredns-5dd5756b68-t92hd" [1ad2dcb3-bcda-42af-b4ce-0d95bba0315f] Running
	I1213 00:14:03.249039  177122 system_pods.go:61] "etcd-embed-certs-335807" [aa5222a7-5670-4550-9d65-6db2095898be] Running
	I1213 00:14:03.249045  177122 system_pods.go:61] "kube-apiserver-embed-certs-335807" [ca0e9de9-8f6a-4bae-b1d1-04b7c0c3cd4c] Running
	I1213 00:14:03.249052  177122 system_pods.go:61] "kube-controller-manager-embed-certs-335807" [f5563afe-3d6c-4b44-b0c0-765da451fd88] Running
	I1213 00:14:03.249057  177122 system_pods.go:61] "kube-proxy-ccq47" [68f3c55f-175e-40af-a769-65c859d5012d] Running
	I1213 00:14:03.249063  177122 system_pods.go:61] "kube-scheduler-embed-certs-335807" [c989cf08-80e9-4b0f-b0e4-f840c6259ace] Running
	I1213 00:14:03.249074  177122 system_pods.go:61] "metrics-server-57f55c9bc5-z7qb4" [b33959c3-63b7-4a81-adda-6d2971036e89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:03.249082  177122 system_pods.go:61] "storage-provisioner" [816660d7-a041-4695-b7da-d977b8891935] Running
	I1213 00:14:03.249095  177122 system_pods.go:74] duration metric: took 179.144496ms to wait for pod list to return data ...
	I1213 00:14:03.249106  177122 default_sa.go:34] waiting for default service account to be created ...
	I1213 00:14:03.444557  177122 default_sa.go:45] found service account: "default"
	I1213 00:14:03.444591  177122 default_sa.go:55] duration metric: took 195.469108ms for default service account to be created ...
	I1213 00:14:03.444603  177122 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 00:14:03.651685  177122 system_pods.go:86] 9 kube-system pods found
	I1213 00:14:03.651714  177122 system_pods.go:89] "coredns-5dd5756b68-gs4kb" [d4b86e83-a0a1-4bf8-958e-e154e91f47ef] Running
	I1213 00:14:03.651719  177122 system_pods.go:89] "coredns-5dd5756b68-t92hd" [1ad2dcb3-bcda-42af-b4ce-0d95bba0315f] Running
	I1213 00:14:03.651723  177122 system_pods.go:89] "etcd-embed-certs-335807" [aa5222a7-5670-4550-9d65-6db2095898be] Running
	I1213 00:14:03.651727  177122 system_pods.go:89] "kube-apiserver-embed-certs-335807" [ca0e9de9-8f6a-4bae-b1d1-04b7c0c3cd4c] Running
	I1213 00:14:03.651731  177122 system_pods.go:89] "kube-controller-manager-embed-certs-335807" [f5563afe-3d6c-4b44-b0c0-765da451fd88] Running
	I1213 00:14:03.651735  177122 system_pods.go:89] "kube-proxy-ccq47" [68f3c55f-175e-40af-a769-65c859d5012d] Running
	I1213 00:14:03.651739  177122 system_pods.go:89] "kube-scheduler-embed-certs-335807" [c989cf08-80e9-4b0f-b0e4-f840c6259ace] Running
	I1213 00:14:03.651745  177122 system_pods.go:89] "metrics-server-57f55c9bc5-z7qb4" [b33959c3-63b7-4a81-adda-6d2971036e89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:03.651750  177122 system_pods.go:89] "storage-provisioner" [816660d7-a041-4695-b7da-d977b8891935] Running
	I1213 00:14:03.651758  177122 system_pods.go:126] duration metric: took 207.148805ms to wait for k8s-apps to be running ...
	I1213 00:14:03.651764  177122 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 00:14:03.651814  177122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:03.666068  177122 system_svc.go:56] duration metric: took 14.292973ms WaitForService to wait for kubelet.
	I1213 00:14:03.666093  177122 kubeadm.go:581] duration metric: took 6.266680553s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1213 00:14:03.666109  177122 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:14:03.845399  177122 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:14:03.845431  177122 node_conditions.go:123] node cpu capacity is 2
	I1213 00:14:03.845447  177122 node_conditions.go:105] duration metric: took 179.332019ms to run NodePressure ...
	I1213 00:14:03.845462  177122 start.go:228] waiting for startup goroutines ...
	I1213 00:14:03.845470  177122 start.go:233] waiting for cluster config update ...
	I1213 00:14:03.845482  177122 start.go:242] writing updated cluster config ...
	I1213 00:14:03.845850  177122 ssh_runner.go:195] Run: rm -f paused
	I1213 00:14:03.898374  177122 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1213 00:14:03.900465  177122 out.go:177] * Done! kubectl is now configured to use "embed-certs-335807" cluster and "default" namespace by default
	I1213 00:14:01.693061  177409 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:01.886947  177409 pod_ready.go:81] duration metric: took 4m0.000066225s waiting for pod "metrics-server-57f55c9bc5-6q9jg" in "kube-system" namespace to be "Ready" ...
	E1213 00:14:01.886997  177409 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1213 00:14:01.887010  177409 pod_ready.go:38] duration metric: took 4m3.203360525s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:14:01.887056  177409 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:14:01.887093  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 00:14:01.887156  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 00:14:01.956004  177409 cri.go:89] found id: "c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:01.956029  177409 cri.go:89] found id: ""
	I1213 00:14:01.956038  177409 logs.go:284] 1 containers: [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1]
	I1213 00:14:01.956096  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:01.961314  177409 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 00:14:01.961388  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 00:14:02.001797  177409 cri.go:89] found id: "fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:02.001825  177409 cri.go:89] found id: ""
	I1213 00:14:02.001835  177409 logs.go:284] 1 containers: [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7]
	I1213 00:14:02.001881  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.007127  177409 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 00:14:02.007193  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 00:14:02.050259  177409 cri.go:89] found id: "125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:02.050283  177409 cri.go:89] found id: ""
	I1213 00:14:02.050294  177409 logs.go:284] 1 containers: [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad]
	I1213 00:14:02.050347  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.056086  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 00:14:02.056147  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 00:14:02.125159  177409 cri.go:89] found id: "c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:02.125189  177409 cri.go:89] found id: ""
	I1213 00:14:02.125199  177409 logs.go:284] 1 containers: [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee]
	I1213 00:14:02.125261  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.129874  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 00:14:02.129939  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 00:14:02.175027  177409 cri.go:89] found id: "545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:02.175058  177409 cri.go:89] found id: ""
	I1213 00:14:02.175067  177409 logs.go:284] 1 containers: [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41]
	I1213 00:14:02.175127  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.180444  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 00:14:02.180515  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 00:14:02.219578  177409 cri.go:89] found id: "57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:02.219603  177409 cri.go:89] found id: ""
	I1213 00:14:02.219610  177409 logs.go:284] 1 containers: [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673]
	I1213 00:14:02.219664  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.223644  177409 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 00:14:02.223693  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 00:14:02.260542  177409 cri.go:89] found id: ""
	I1213 00:14:02.260567  177409 logs.go:284] 0 containers: []
	W1213 00:14:02.260575  177409 logs.go:286] No container was found matching "kindnet"
	I1213 00:14:02.260583  177409 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 00:14:02.260656  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 00:14:02.304058  177409 cri.go:89] found id: "c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:02.304082  177409 cri.go:89] found id: "705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:02.304090  177409 cri.go:89] found id: ""
	I1213 00:14:02.304100  177409 logs.go:284] 2 containers: [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a]
	I1213 00:14:02.304159  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.308606  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:02.312421  177409 logs.go:123] Gathering logs for kube-scheduler [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee] ...
	I1213 00:14:02.312473  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:02.356415  177409 logs.go:123] Gathering logs for kube-proxy [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41] ...
	I1213 00:14:02.356460  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:02.405870  177409 logs.go:123] Gathering logs for CRI-O ...
	I1213 00:14:02.405902  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 00:14:02.876461  177409 logs.go:123] Gathering logs for describe nodes ...
	I1213 00:14:02.876508  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 00:14:03.037302  177409 logs.go:123] Gathering logs for kube-apiserver [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1] ...
	I1213 00:14:03.037334  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:03.098244  177409 logs.go:123] Gathering logs for kube-controller-manager [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673] ...
	I1213 00:14:03.098273  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:03.163681  177409 logs.go:123] Gathering logs for container status ...
	I1213 00:14:03.163712  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 00:14:03.216883  177409 logs.go:123] Gathering logs for kubelet ...
	I1213 00:14:03.216912  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 00:14:03.267979  177409 logs.go:123] Gathering logs for coredns [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad] ...
	I1213 00:14:03.268011  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:03.309364  177409 logs.go:123] Gathering logs for storage-provisioner [705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a] ...
	I1213 00:14:03.309394  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:03.352427  177409 logs.go:123] Gathering logs for etcd [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7] ...
	I1213 00:14:03.352479  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:03.406508  177409 logs.go:123] Gathering logs for storage-provisioner [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974] ...
	I1213 00:14:03.406547  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:03.449959  177409 logs.go:123] Gathering logs for dmesg ...
	I1213 00:14:03.449985  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 00:14:02.784516  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:05.284536  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:09.408895  177307 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.578851358s)
	I1213 00:14:09.408954  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:09.422044  177307 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:14:09.430579  177307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:14:09.438689  177307 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 00:14:09.438727  177307 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 00:14:09.493519  177307 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I1213 00:14:09.493657  177307 kubeadm.go:322] [preflight] Running pre-flight checks
	I1213 00:14:09.648151  177307 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 00:14:09.648294  177307 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 00:14:09.648489  177307 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 00:14:09.908199  177307 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 00:14:05.974125  177409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:14:05.992335  177409 api_server.go:72] duration metric: took 4m12.842684139s to wait for apiserver process to appear ...
	I1213 00:14:05.992364  177409 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:14:05.992411  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 00:14:05.992491  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 00:14:06.037770  177409 cri.go:89] found id: "c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:06.037796  177409 cri.go:89] found id: ""
	I1213 00:14:06.037805  177409 logs.go:284] 1 containers: [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1]
	I1213 00:14:06.037863  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.042949  177409 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 00:14:06.043016  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 00:14:06.090863  177409 cri.go:89] found id: "fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:06.090888  177409 cri.go:89] found id: ""
	I1213 00:14:06.090897  177409 logs.go:284] 1 containers: [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7]
	I1213 00:14:06.090951  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.103859  177409 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 00:14:06.103925  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 00:14:06.156957  177409 cri.go:89] found id: "125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:06.156982  177409 cri.go:89] found id: ""
	I1213 00:14:06.156992  177409 logs.go:284] 1 containers: [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad]
	I1213 00:14:06.157053  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.162170  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 00:14:06.162220  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 00:14:06.204839  177409 cri.go:89] found id: "c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:06.204867  177409 cri.go:89] found id: ""
	I1213 00:14:06.204877  177409 logs.go:284] 1 containers: [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee]
	I1213 00:14:06.204942  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.210221  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 00:14:06.210287  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 00:14:06.255881  177409 cri.go:89] found id: "545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:06.255909  177409 cri.go:89] found id: ""
	I1213 00:14:06.255918  177409 logs.go:284] 1 containers: [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41]
	I1213 00:14:06.255984  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.260853  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 00:14:06.260924  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 00:14:06.308377  177409 cri.go:89] found id: "57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:06.308400  177409 cri.go:89] found id: ""
	I1213 00:14:06.308413  177409 logs.go:284] 1 containers: [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673]
	I1213 00:14:06.308493  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.315028  177409 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 00:14:06.315111  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 00:14:06.365453  177409 cri.go:89] found id: ""
	I1213 00:14:06.365484  177409 logs.go:284] 0 containers: []
	W1213 00:14:06.365494  177409 logs.go:286] No container was found matching "kindnet"
	I1213 00:14:06.365507  177409 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 00:14:06.365568  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 00:14:06.423520  177409 cri.go:89] found id: "c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:06.423545  177409 cri.go:89] found id: "705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:06.423560  177409 cri.go:89] found id: ""
	I1213 00:14:06.423571  177409 logs.go:284] 2 containers: [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a]
	I1213 00:14:06.423628  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.429613  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:06.434283  177409 logs.go:123] Gathering logs for describe nodes ...
	I1213 00:14:06.434310  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 00:14:06.571329  177409 logs.go:123] Gathering logs for kube-proxy [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41] ...
	I1213 00:14:06.571375  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:06.613274  177409 logs.go:123] Gathering logs for kube-controller-manager [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673] ...
	I1213 00:14:06.613307  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:06.673407  177409 logs.go:123] Gathering logs for dmesg ...
	I1213 00:14:06.673455  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 00:14:06.688886  177409 logs.go:123] Gathering logs for coredns [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad] ...
	I1213 00:14:06.688933  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:06.733130  177409 logs.go:123] Gathering logs for storage-provisioner [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974] ...
	I1213 00:14:06.733162  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:06.780131  177409 logs.go:123] Gathering logs for storage-provisioner [705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a] ...
	I1213 00:14:06.780161  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:06.827465  177409 logs.go:123] Gathering logs for etcd [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7] ...
	I1213 00:14:06.827500  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:06.880245  177409 logs.go:123] Gathering logs for kube-scheduler [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee] ...
	I1213 00:14:06.880286  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:06.919735  177409 logs.go:123] Gathering logs for kubelet ...
	I1213 00:14:06.919764  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 00:14:06.974039  177409 logs.go:123] Gathering logs for CRI-O ...
	I1213 00:14:06.974074  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 00:14:07.400452  177409 logs.go:123] Gathering logs for container status ...
	I1213 00:14:07.400491  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 00:14:07.456759  177409 logs.go:123] Gathering logs for kube-apiserver [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1] ...
	I1213 00:14:07.456789  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:10.010686  177409 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I1213 00:14:10.017803  177409 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I1213 00:14:10.019196  177409 api_server.go:141] control plane version: v1.28.4
	I1213 00:14:10.019216  177409 api_server.go:131] duration metric: took 4.026844615s to wait for apiserver health ...
	I1213 00:14:10.019225  177409 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:14:10.019251  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 00:14:10.019303  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 00:14:07.784301  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:09.785226  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:09.910151  177307 out.go:204]   - Generating certificates and keys ...
	I1213 00:14:09.910259  177307 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1213 00:14:09.910339  177307 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1213 00:14:09.910444  177307 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 00:14:09.910527  177307 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1213 00:14:09.910616  177307 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 00:14:09.910662  177307 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1213 00:14:09.910713  177307 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1213 00:14:09.910791  177307 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1213 00:14:09.910892  177307 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 00:14:09.911041  177307 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 00:14:09.911107  177307 kubeadm.go:322] [certs] Using the existing "sa" key
	I1213 00:14:09.911186  177307 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 00:14:10.262533  177307 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 00:14:10.508123  177307 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 00:14:10.766822  177307 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 00:14:10.866565  177307 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 00:14:11.206659  177307 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 00:14:11.207238  177307 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 00:14:11.210018  177307 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 00:14:10.061672  177409 cri.go:89] found id: "c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:10.061699  177409 cri.go:89] found id: ""
	I1213 00:14:10.061708  177409 logs.go:284] 1 containers: [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1]
	I1213 00:14:10.061769  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.066426  177409 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 00:14:10.066491  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 00:14:10.107949  177409 cri.go:89] found id: "fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:10.107978  177409 cri.go:89] found id: ""
	I1213 00:14:10.107994  177409 logs.go:284] 1 containers: [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7]
	I1213 00:14:10.108053  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.112321  177409 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 00:14:10.112393  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 00:14:10.169082  177409 cri.go:89] found id: "125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:10.169110  177409 cri.go:89] found id: ""
	I1213 00:14:10.169120  177409 logs.go:284] 1 containers: [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad]
	I1213 00:14:10.169175  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.174172  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 00:14:10.174225  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 00:14:10.220290  177409 cri.go:89] found id: "c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:10.220313  177409 cri.go:89] found id: ""
	I1213 00:14:10.220326  177409 logs.go:284] 1 containers: [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee]
	I1213 00:14:10.220384  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.225241  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 00:14:10.225310  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 00:14:10.271312  177409 cri.go:89] found id: "545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:10.271336  177409 cri.go:89] found id: ""
	I1213 00:14:10.271345  177409 logs.go:284] 1 containers: [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41]
	I1213 00:14:10.271401  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.275974  177409 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 00:14:10.276049  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 00:14:10.324262  177409 cri.go:89] found id: "57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:10.324288  177409 cri.go:89] found id: ""
	I1213 00:14:10.324299  177409 logs.go:284] 1 containers: [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673]
	I1213 00:14:10.324360  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.329065  177409 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 00:14:10.329130  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 00:14:10.375611  177409 cri.go:89] found id: ""
	I1213 00:14:10.375640  177409 logs.go:284] 0 containers: []
	W1213 00:14:10.375648  177409 logs.go:286] No container was found matching "kindnet"
	I1213 00:14:10.375654  177409 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 00:14:10.375725  177409 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 00:14:10.420778  177409 cri.go:89] found id: "c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:10.420807  177409 cri.go:89] found id: "705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:10.420812  177409 cri.go:89] found id: ""
	I1213 00:14:10.420819  177409 logs.go:284] 2 containers: [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a]
	I1213 00:14:10.420866  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.425676  177409 ssh_runner.go:195] Run: which crictl
	I1213 00:14:10.430150  177409 logs.go:123] Gathering logs for kubelet ...
	I1213 00:14:10.430180  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 00:14:10.486314  177409 logs.go:123] Gathering logs for dmesg ...
	I1213 00:14:10.486351  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 00:14:10.500915  177409 logs.go:123] Gathering logs for kube-proxy [545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41] ...
	I1213 00:14:10.500946  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 545581d8fb2dd528df10e6fc1bf93153d753deabfed5c8ae6c4de41f207abb41"
	I1213 00:14:10.543073  177409 logs.go:123] Gathering logs for storage-provisioner [c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974] ...
	I1213 00:14:10.543108  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290417afdb45ded042fb82bc90a60b4f001776d358a12aefc456be712d37974"
	I1213 00:14:10.584779  177409 logs.go:123] Gathering logs for storage-provisioner [705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a] ...
	I1213 00:14:10.584814  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 705b27e3bd760875bff70aa94df75264c02ad2b58f79a89c7113292114db2d8a"
	I1213 00:14:10.629824  177409 logs.go:123] Gathering logs for describe nodes ...
	I1213 00:14:10.629852  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 00:14:10.756816  177409 logs.go:123] Gathering logs for kube-apiserver [c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1] ...
	I1213 00:14:10.756857  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4c918252a292323d2f36511e4b96fdd3d645d8113dd882ed0f0569660414da1"
	I1213 00:14:10.807506  177409 logs.go:123] Gathering logs for coredns [125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad] ...
	I1213 00:14:10.807536  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 125252879d69a8dc78c276800ad49dd3cf100c7140ab40c99b672fba4f5674ad"
	I1213 00:14:10.849398  177409 logs.go:123] Gathering logs for kube-controller-manager [57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673] ...
	I1213 00:14:10.849436  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57e6249b6837d3686a86d16c050fb3e755cb793e04ea3eacf37bbb59ca488673"
	I1213 00:14:10.911470  177409 logs.go:123] Gathering logs for CRI-O ...
	I1213 00:14:10.911508  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 00:14:11.288892  177409 logs.go:123] Gathering logs for etcd [fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7] ...
	I1213 00:14:11.288941  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd8469f4d2e9866faef858c46fe615f3a55b98f29c0c4fe6dc9124f2fb57dad7"
	I1213 00:14:11.361299  177409 logs.go:123] Gathering logs for kube-scheduler [c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee] ...
	I1213 00:14:11.361347  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c94b9bf453ae333b2e0b9c51ef3856518a3124eec24f0a7cffe96e6459a086ee"
	I1213 00:14:11.407800  177409 logs.go:123] Gathering logs for container status ...
	I1213 00:14:11.407850  177409 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 00:14:13.965440  177409 system_pods.go:59] 8 kube-system pods found
	I1213 00:14:13.965477  177409 system_pods.go:61] "coredns-5dd5756b68-ftv9l" [60d9730b-2e6b-4263-a70d-273cf6837f60] Running
	I1213 00:14:13.965485  177409 system_pods.go:61] "etcd-default-k8s-diff-port-743278" [4c78aadc-8213-4cd3-a365-e8d71d00b1ed] Running
	I1213 00:14:13.965493  177409 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-743278" [2590235b-fc24-41ed-8879-909cbba26d5c] Running
	I1213 00:14:13.965500  177409 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-743278" [58396a31-0a0d-4a3f-b658-e27f836affd1] Running
	I1213 00:14:13.965505  177409 system_pods.go:61] "kube-proxy-zk4wl" [e20fe8f7-0c1f-4be3-8184-cd3d6cc19a43] Running
	I1213 00:14:13.965509  177409 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-743278" [e4312815-a719-4ab5-b628-9958dd7ce658] Running
	I1213 00:14:13.965518  177409 system_pods.go:61] "metrics-server-57f55c9bc5-6q9jg" [b1849258-4fd1-43a5-b67b-02d8e44acd8b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:13.965528  177409 system_pods.go:61] "storage-provisioner" [d87ee16e-300f-4797-b0be-efc256d0e827] Running
	I1213 00:14:13.965538  177409 system_pods.go:74] duration metric: took 3.946305195s to wait for pod list to return data ...
	I1213 00:14:13.965548  177409 default_sa.go:34] waiting for default service account to be created ...
	I1213 00:14:13.969074  177409 default_sa.go:45] found service account: "default"
	I1213 00:14:13.969103  177409 default_sa.go:55] duration metric: took 3.543208ms for default service account to be created ...
	I1213 00:14:13.969114  177409 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 00:14:13.977167  177409 system_pods.go:86] 8 kube-system pods found
	I1213 00:14:13.977201  177409 system_pods.go:89] "coredns-5dd5756b68-ftv9l" [60d9730b-2e6b-4263-a70d-273cf6837f60] Running
	I1213 00:14:13.977211  177409 system_pods.go:89] "etcd-default-k8s-diff-port-743278" [4c78aadc-8213-4cd3-a365-e8d71d00b1ed] Running
	I1213 00:14:13.977219  177409 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-743278" [2590235b-fc24-41ed-8879-909cbba26d5c] Running
	I1213 00:14:13.977226  177409 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-743278" [58396a31-0a0d-4a3f-b658-e27f836affd1] Running
	I1213 00:14:13.977232  177409 system_pods.go:89] "kube-proxy-zk4wl" [e20fe8f7-0c1f-4be3-8184-cd3d6cc19a43] Running
	I1213 00:14:13.977238  177409 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-743278" [e4312815-a719-4ab5-b628-9958dd7ce658] Running
	I1213 00:14:13.977249  177409 system_pods.go:89] "metrics-server-57f55c9bc5-6q9jg" [b1849258-4fd1-43a5-b67b-02d8e44acd8b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:13.977257  177409 system_pods.go:89] "storage-provisioner" [d87ee16e-300f-4797-b0be-efc256d0e827] Running
	I1213 00:14:13.977272  177409 system_pods.go:126] duration metric: took 8.1502ms to wait for k8s-apps to be running ...
	I1213 00:14:13.977288  177409 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 00:14:13.977342  177409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:13.996304  177409 system_svc.go:56] duration metric: took 19.006856ms WaitForService to wait for kubelet.
	I1213 00:14:13.996340  177409 kubeadm.go:581] duration metric: took 4m20.846697962s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1213 00:14:13.996374  177409 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:14:14.000473  177409 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:14:14.000505  177409 node_conditions.go:123] node cpu capacity is 2
	I1213 00:14:14.000518  177409 node_conditions.go:105] duration metric: took 4.137212ms to run NodePressure ...
	I1213 00:14:14.000534  177409 start.go:228] waiting for startup goroutines ...
	I1213 00:14:14.000544  177409 start.go:233] waiting for cluster config update ...
	I1213 00:14:14.000561  177409 start.go:242] writing updated cluster config ...
	I1213 00:14:14.000901  177409 ssh_runner.go:195] Run: rm -f paused
	I1213 00:14:14.059785  177409 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1213 00:14:14.062155  177409 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-743278" cluster and "default" namespace by default
	I1213 00:14:11.212405  177307 out.go:204]   - Booting up control plane ...
	I1213 00:14:11.212538  177307 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 00:14:11.213865  177307 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 00:14:11.215312  177307 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 00:14:11.235356  177307 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 00:14:11.236645  177307 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 00:14:11.236755  177307 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1213 00:14:11.385788  177307 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 00:14:12.284994  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:14.784159  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:19.387966  177307 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002219 seconds
	I1213 00:14:19.402873  177307 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 00:14:19.424220  177307 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 00:14:19.954243  177307 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 00:14:19.954453  177307 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-143586 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 00:14:20.468986  177307 kubeadm.go:322] [bootstrap-token] Using token: nss44e.j85t1ilri9kvvn0e
	I1213 00:14:16.785364  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:19.284214  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:20.470732  177307 out.go:204]   - Configuring RBAC rules ...
	I1213 00:14:20.470866  177307 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 00:14:20.479490  177307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 00:14:20.488098  177307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 00:14:20.491874  177307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 00:14:20.496891  177307 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 00:14:20.506058  177307 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 00:14:20.523032  177307 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 00:14:20.796465  177307 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1213 00:14:20.892018  177307 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1213 00:14:20.892049  177307 kubeadm.go:322] 
	I1213 00:14:20.892159  177307 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1213 00:14:20.892185  177307 kubeadm.go:322] 
	I1213 00:14:20.892284  177307 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1213 00:14:20.892296  177307 kubeadm.go:322] 
	I1213 00:14:20.892338  177307 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1213 00:14:20.892421  177307 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 00:14:20.892512  177307 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 00:14:20.892529  177307 kubeadm.go:322] 
	I1213 00:14:20.892620  177307 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1213 00:14:20.892648  177307 kubeadm.go:322] 
	I1213 00:14:20.892734  177307 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 00:14:20.892745  177307 kubeadm.go:322] 
	I1213 00:14:20.892807  177307 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1213 00:14:20.892938  177307 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 00:14:20.893057  177307 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 00:14:20.893072  177307 kubeadm.go:322] 
	I1213 00:14:20.893182  177307 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 00:14:20.893286  177307 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1213 00:14:20.893307  177307 kubeadm.go:322] 
	I1213 00:14:20.893446  177307 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nss44e.j85t1ilri9kvvn0e \
	I1213 00:14:20.893588  177307 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d \
	I1213 00:14:20.893625  177307 kubeadm.go:322] 	--control-plane 
	I1213 00:14:20.893634  177307 kubeadm.go:322] 
	I1213 00:14:20.893740  177307 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1213 00:14:20.893752  177307 kubeadm.go:322] 
	I1213 00:14:20.893877  177307 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nss44e.j85t1ilri9kvvn0e \
	I1213 00:14:20.894017  177307 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1213 00:14:20.895217  177307 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 00:14:20.895249  177307 cni.go:84] Creating CNI manager for ""
	I1213 00:14:20.895261  177307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:14:20.897262  177307 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:14:20.898838  177307 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:14:20.933446  177307 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1213 00:14:20.985336  177307 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:14:20.985435  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:20.985458  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=no-preload-143586 minikube.k8s.io/updated_at=2023_12_13T00_14_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:21.062513  177307 ops.go:34] apiserver oom_adj: -16
	I1213 00:14:21.374568  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:21.482135  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:22.088971  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:22.588816  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:23.088960  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:23.588701  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:24.088568  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:21.783473  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:23.784019  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:25.785712  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:24.588803  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:25.088983  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:25.589097  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:26.088561  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:26.589160  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:27.088601  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:27.588337  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:28.088578  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:28.588533  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:29.088398  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:28.284015  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:30.285509  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:29.588587  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:30.088826  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:30.588871  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:31.089336  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:31.588959  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:32.088390  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:32.589079  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:33.088948  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:33.589067  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:34.089108  177307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:14:34.261304  177307 kubeadm.go:1088] duration metric: took 13.275930767s to wait for elevateKubeSystemPrivileges.
	I1213 00:14:34.261367  177307 kubeadm.go:406] StartCluster complete in 5m12.573209179s
	I1213 00:14:34.261392  177307 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:14:34.261511  177307 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:14:34.264237  177307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:14:34.264668  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:14:34.264951  177307 config.go:182] Loaded profile config "no-preload-143586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1213 00:14:34.265065  177307 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:14:34.265128  177307 addons.go:69] Setting storage-provisioner=true in profile "no-preload-143586"
	I1213 00:14:34.265150  177307 addons.go:231] Setting addon storage-provisioner=true in "no-preload-143586"
	W1213 00:14:34.265161  177307 addons.go:240] addon storage-provisioner should already be in state true
	I1213 00:14:34.265202  177307 host.go:66] Checking if "no-preload-143586" exists ...
	I1213 00:14:34.265231  177307 addons.go:69] Setting default-storageclass=true in profile "no-preload-143586"
	I1213 00:14:34.265262  177307 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-143586"
	I1213 00:14:34.265606  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.265612  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.265627  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.265628  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.265846  177307 addons.go:69] Setting metrics-server=true in profile "no-preload-143586"
	I1213 00:14:34.265878  177307 addons.go:231] Setting addon metrics-server=true in "no-preload-143586"
	W1213 00:14:34.265890  177307 addons.go:240] addon metrics-server should already be in state true
	I1213 00:14:34.265935  177307 host.go:66] Checking if "no-preload-143586" exists ...
	I1213 00:14:34.266231  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.266277  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.287844  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34161
	I1213 00:14:34.287882  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38719
	I1213 00:14:34.287968  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43979
	I1213 00:14:34.288509  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.288529  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.288811  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.289178  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.289197  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.289310  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.289325  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.289335  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.289347  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.289707  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.289713  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.289736  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.289891  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:14:34.290392  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.290398  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.290415  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.290417  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.293696  177307 addons.go:231] Setting addon default-storageclass=true in "no-preload-143586"
	W1213 00:14:34.293725  177307 addons.go:240] addon default-storageclass should already be in state true
	I1213 00:14:34.293756  177307 host.go:66] Checking if "no-preload-143586" exists ...
	I1213 00:14:34.294150  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.294187  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.309103  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39273
	I1213 00:14:34.309683  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.310362  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.310387  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.310830  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.311091  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:14:34.312755  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37041
	I1213 00:14:34.313192  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.313601  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:14:34.313796  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.313814  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.316496  177307 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:14:34.314223  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.316102  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41945
	I1213 00:14:34.318112  177307 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:14:34.318127  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:14:34.318144  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:14:34.318260  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.318670  177307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:14:34.318693  177307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:14:34.319401  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.319422  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.319860  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.320080  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:14:34.321977  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:14:34.323695  177307 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 00:14:34.322509  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.325025  177307 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 00:14:34.325037  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 00:14:34.325053  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:14:34.323731  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:14:34.325089  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.323250  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:14:34.325250  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:14:34.325428  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:14:34.325563  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:14:34.328055  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.328364  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:14:34.328386  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.328712  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:14:34.328867  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:14:34.328980  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:14:34.329099  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:14:34.339175  177307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35817
	I1213 00:14:34.339820  177307 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:14:34.340300  177307 main.go:141] libmachine: Using API Version  1
	I1213 00:14:34.340314  177307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:14:34.340662  177307 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:14:34.340821  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetState
	I1213 00:14:34.342399  177307 main.go:141] libmachine: (no-preload-143586) Calling .DriverName
	I1213 00:14:34.342673  177307 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:14:34.342694  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:14:34.342720  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHHostname
	I1213 00:14:34.345475  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.345804  177307 main.go:141] libmachine: (no-preload-143586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:da:7b", ip: ""} in network mk-no-preload-143586: {Iface:virbr2 ExpiryTime:2023-12-13 00:59:29 +0000 UTC Type:0 Mac:52:54:00:4d:da:7b Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:no-preload-143586 Clientid:01:52:54:00:4d:da:7b}
	I1213 00:14:34.345839  177307 main.go:141] libmachine: (no-preload-143586) DBG | domain no-preload-143586 has defined IP address 192.168.50.181 and MAC address 52:54:00:4d:da:7b in network mk-no-preload-143586
	I1213 00:14:34.346062  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHPort
	I1213 00:14:34.346256  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHKeyPath
	I1213 00:14:34.346453  177307 main.go:141] libmachine: (no-preload-143586) Calling .GetSSHUsername
	I1213 00:14:34.346622  177307 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/no-preload-143586/id_rsa Username:docker}
	I1213 00:14:34.425634  177307 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-143586" context rescaled to 1 replicas
	I1213 00:14:34.425672  177307 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.181 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:14:34.427471  177307 out.go:177] * Verifying Kubernetes components...
	I1213 00:14:32.783642  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:34.786810  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:34.428983  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:34.589995  177307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:14:34.590692  177307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:14:34.592452  177307 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 00:14:34.592472  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 00:14:34.643312  177307 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 00:14:34.643336  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 00:14:34.649786  177307 node_ready.go:35] waiting up to 6m0s for node "no-preload-143586" to be "Ready" ...
	I1213 00:14:34.649926  177307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 00:14:34.683306  177307 node_ready.go:49] node "no-preload-143586" has status "Ready":"True"
	I1213 00:14:34.683339  177307 node_ready.go:38] duration metric: took 33.525188ms waiting for node "no-preload-143586" to be "Ready" ...
	I1213 00:14:34.683352  177307 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:14:34.711542  177307 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:14:34.711570  177307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 00:14:34.738788  177307 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-8fb8b" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:34.823110  177307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:14:35.743550  177307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.153515373s)
	I1213 00:14:35.743618  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.743634  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.743661  177307 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.093703901s)
	I1213 00:14:35.743611  177307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.152891747s)
	I1213 00:14:35.743699  177307 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1213 00:14:35.743719  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.743732  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.744060  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:35.744059  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.744088  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.744100  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.744114  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.744114  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:35.744158  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.744195  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.744209  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.744223  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.745779  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.745829  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.745855  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.745838  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.745797  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:35.745790  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:35.757271  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:35.757292  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:35.757758  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:35.757776  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:35.757787  177307 main.go:141] libmachine: (no-preload-143586) DBG | Closing plugin on server side
	I1213 00:14:36.114702  177307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.291538738s)
	I1213 00:14:36.114760  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:36.114773  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:36.115132  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:36.115149  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:36.115158  177307 main.go:141] libmachine: Making call to close driver server
	I1213 00:14:36.115168  177307 main.go:141] libmachine: (no-preload-143586) Calling .Close
	I1213 00:14:36.115411  177307 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:14:36.115426  177307 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:14:36.115436  177307 addons.go:467] Verifying addon metrics-server=true in "no-preload-143586"
	I1213 00:14:36.117975  177307 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1213 00:14:36.119554  177307 addons.go:502] enable addons completed in 1.85448385s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1213 00:14:37.069993  177307 pod_ready.go:102] pod "coredns-76f75df574-8fb8b" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:38.563525  177307 pod_ready.go:92] pod "coredns-76f75df574-8fb8b" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.563551  177307 pod_ready.go:81] duration metric: took 3.824732725s waiting for pod "coredns-76f75df574-8fb8b" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.563561  177307 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hs9rg" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.565949  177307 pod_ready.go:97] error getting pod "coredns-76f75df574-hs9rg" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-hs9rg" not found
	I1213 00:14:38.565976  177307 pod_ready.go:81] duration metric: took 2.409349ms waiting for pod "coredns-76f75df574-hs9rg" in "kube-system" namespace to be "Ready" ...
	E1213 00:14:38.565984  177307 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-hs9rg" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-hs9rg" not found
	I1213 00:14:38.565990  177307 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.571396  177307 pod_ready.go:92] pod "etcd-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.571416  177307 pod_ready.go:81] duration metric: took 5.419634ms waiting for pod "etcd-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.571424  177307 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.576228  177307 pod_ready.go:92] pod "kube-apiserver-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.576248  177307 pod_ready.go:81] duration metric: took 4.818853ms waiting for pod "kube-apiserver-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.576256  177307 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.581260  177307 pod_ready.go:92] pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.581281  177307 pod_ready.go:81] duration metric: took 5.019621ms waiting for pod "kube-controller-manager-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.581289  177307 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xsdtr" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.760984  177307 pod_ready.go:92] pod "kube-proxy-xsdtr" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:38.761006  177307 pod_ready.go:81] duration metric: took 179.711484ms waiting for pod "kube-proxy-xsdtr" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:38.761015  177307 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:39.160713  177307 pod_ready.go:92] pod "kube-scheduler-no-preload-143586" in "kube-system" namespace has status "Ready":"True"
	I1213 00:14:39.160738  177307 pod_ready.go:81] duration metric: took 399.716844ms waiting for pod "kube-scheduler-no-preload-143586" in "kube-system" namespace to be "Ready" ...
	I1213 00:14:39.160746  177307 pod_ready.go:38] duration metric: took 4.477382003s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:14:39.160762  177307 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:14:39.160809  177307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:14:39.176747  177307 api_server.go:72] duration metric: took 4.751030848s to wait for apiserver process to appear ...
	I1213 00:14:39.176774  177307 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:14:39.176791  177307 api_server.go:253] Checking apiserver healthz at https://192.168.50.181:8443/healthz ...
	I1213 00:14:39.183395  177307 api_server.go:279] https://192.168.50.181:8443/healthz returned 200:
	ok
	I1213 00:14:39.184769  177307 api_server.go:141] control plane version: v1.29.0-rc.2
	I1213 00:14:39.184789  177307 api_server.go:131] duration metric: took 8.009007ms to wait for apiserver health ...
	I1213 00:14:39.184799  177307 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:14:39.364215  177307 system_pods.go:59] 8 kube-system pods found
	I1213 00:14:39.364251  177307 system_pods.go:61] "coredns-76f75df574-8fb8b" [3ca4e237-6a35-4e8f-9731-3f9655eba995] Running
	I1213 00:14:39.364256  177307 system_pods.go:61] "etcd-no-preload-143586" [205757f5-5bd7-416c-af72-6ebf428d7302] Running
	I1213 00:14:39.364260  177307 system_pods.go:61] "kube-apiserver-no-preload-143586" [a479caf2-8ff4-4b78-8d93-e8f672a853b9] Running
	I1213 00:14:39.364265  177307 system_pods.go:61] "kube-controller-manager-no-preload-143586" [4afd5e8a-64b3-4e0c-a723-b6a6bd2445d4] Running
	I1213 00:14:39.364269  177307 system_pods.go:61] "kube-proxy-xsdtr" [23a261a4-17d1-4657-8052-02b71055c850] Running
	I1213 00:14:39.364273  177307 system_pods.go:61] "kube-scheduler-no-preload-143586" [71312243-0ba6-40e0-80a5-c1652b5270e9] Running
	I1213 00:14:39.364280  177307 system_pods.go:61] "metrics-server-57f55c9bc5-q7v45" [1579f5c9-d574-4ab8-9add-e89621b9c203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:39.364284  177307 system_pods.go:61] "storage-provisioner" [400b27cc-1713-4201-8097-3e3fd8004690] Running
	I1213 00:14:39.364292  177307 system_pods.go:74] duration metric: took 179.488069ms to wait for pod list to return data ...
	I1213 00:14:39.364301  177307 default_sa.go:34] waiting for default service account to be created ...
	I1213 00:14:39.560330  177307 default_sa.go:45] found service account: "default"
	I1213 00:14:39.560364  177307 default_sa.go:55] duration metric: took 196.056049ms for default service account to be created ...
	I1213 00:14:39.560376  177307 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 00:14:39.763340  177307 system_pods.go:86] 8 kube-system pods found
	I1213 00:14:39.763384  177307 system_pods.go:89] "coredns-76f75df574-8fb8b" [3ca4e237-6a35-4e8f-9731-3f9655eba995] Running
	I1213 00:14:39.763393  177307 system_pods.go:89] "etcd-no-preload-143586" [205757f5-5bd7-416c-af72-6ebf428d7302] Running
	I1213 00:14:39.763400  177307 system_pods.go:89] "kube-apiserver-no-preload-143586" [a479caf2-8ff4-4b78-8d93-e8f672a853b9] Running
	I1213 00:14:39.763405  177307 system_pods.go:89] "kube-controller-manager-no-preload-143586" [4afd5e8a-64b3-4e0c-a723-b6a6bd2445d4] Running
	I1213 00:14:39.763409  177307 system_pods.go:89] "kube-proxy-xsdtr" [23a261a4-17d1-4657-8052-02b71055c850] Running
	I1213 00:14:39.763414  177307 system_pods.go:89] "kube-scheduler-no-preload-143586" [71312243-0ba6-40e0-80a5-c1652b5270e9] Running
	I1213 00:14:39.763426  177307 system_pods.go:89] "metrics-server-57f55c9bc5-q7v45" [1579f5c9-d574-4ab8-9add-e89621b9c203] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:14:39.763434  177307 system_pods.go:89] "storage-provisioner" [400b27cc-1713-4201-8097-3e3fd8004690] Running
	I1213 00:14:39.763449  177307 system_pods.go:126] duration metric: took 203.065345ms to wait for k8s-apps to be running ...
	I1213 00:14:39.763458  177307 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 00:14:39.763517  177307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:14:39.783072  177307 system_svc.go:56] duration metric: took 19.601725ms WaitForService to wait for kubelet.
	I1213 00:14:39.783120  177307 kubeadm.go:581] duration metric: took 5.357406192s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1213 00:14:39.783147  177307 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:14:39.962475  177307 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:14:39.962501  177307 node_conditions.go:123] node cpu capacity is 2
	I1213 00:14:39.962511  177307 node_conditions.go:105] duration metric: took 179.359327ms to run NodePressure ...
	I1213 00:14:39.962524  177307 start.go:228] waiting for startup goroutines ...
	I1213 00:14:39.962532  177307 start.go:233] waiting for cluster config update ...
	I1213 00:14:39.962544  177307 start.go:242] writing updated cluster config ...
	I1213 00:14:39.962816  177307 ssh_runner.go:195] Run: rm -f paused
	I1213 00:14:40.016206  177307 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.2 (minor skew: 1)
	I1213 00:14:40.018375  177307 out.go:177] * Done! kubectl is now configured to use "no-preload-143586" cluster and "default" namespace by default
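
For context on the healthz probe logged above (api_server.go:253/279, where https://192.168.50.181:8443/healthz returned 200 "ok"): the sketch below is a minimal, self-contained Go version of that kind of check, not minikube's actual code; the URL is copied from the log and the insecure TLS client is an assumption made purely so the example runs outside the cluster's trust store.

    // healthz_probe.go - illustrative only; minikube's own check lives in api_server.go.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint taken from the log; in practice it comes from the cluster config.
        url := "https://192.168.50.181:8443/healthz"

        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver certificate is not trusted by the probe host in this sketch,
            // so certificate verification is skipped here for illustration only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }

        resp, err := client.Get(url)
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver answers 200 with the literal body "ok", as seen in the log.
        fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, string(body))
    }
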
	I1213 00:14:37.286105  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:39.786060  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:42.285678  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:44.784213  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:47.285680  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:49.783428  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:51.785923  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:54.283780  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:56.783343  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:14:59.283053  176813 pod_ready.go:102] pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace has status "Ready":"False"
	I1213 00:15:00.976984  176813 pod_ready.go:81] duration metric: took 4m0.000041493s waiting for pod "metrics-server-74d5856cc6-fhn5s" in "kube-system" namespace to be "Ready" ...
	E1213 00:15:00.977016  176813 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1213 00:15:00.977037  176813 pod_ready.go:38] duration metric: took 4m1.19985839s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:15:00.977064  176813 kubeadm.go:640] restartCluster took 5m6.659231001s
	W1213 00:15:00.977141  176813 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1213 00:15:00.977178  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 00:15:07.653665  176813 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.676456274s)
	I1213 00:15:07.653745  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:15:07.673981  176813 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 00:15:07.688018  176813 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 00:15:07.699196  176813 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
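
The failing `ls` above is how minikube decides whether stale kubeconfig fragments need cleaning before falling back to `kubeadm init`; here all four files are absent because the preceding `kubeadm reset` removed them, so cleanup is skipped and init proceeds. A minimal Go sketch of an equivalent existence check follows; it is illustrative only and not the ssh_runner-based logic minikube actually uses.

    // Checks the same four control-plane kubeconfig files the logged `ls -la` probes.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        stale := false
        for _, f := range files {
            if _, err := os.Stat(f); err == nil {
                stale = true
                fmt.Println("found existing config:", f)
            } else if os.IsNotExist(err) {
                fmt.Println("missing (fresh init expected):", f)
            } else {
                fmt.Println("stat error:", err)
            }
        }
        if !stale {
            fmt.Println("no stale configs; proceeding straight to kubeadm init")
        }
    }
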
	I1213 00:15:07.699244  176813 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1213 00:15:07.761890  176813 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1213 00:15:07.762010  176813 kubeadm.go:322] [preflight] Running pre-flight checks
	I1213 00:15:07.921068  176813 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 00:15:07.921220  176813 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 00:15:07.921360  176813 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 00:15:08.151937  176813 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 00:15:08.152063  176813 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 00:15:08.159296  176813 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1213 00:15:08.285060  176813 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 00:15:08.286903  176813 out.go:204]   - Generating certificates and keys ...
	I1213 00:15:08.287074  176813 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1213 00:15:08.287174  176813 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1213 00:15:08.290235  176813 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 00:15:08.290397  176813 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1213 00:15:08.290878  176813 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 00:15:08.291179  176813 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1213 00:15:08.291663  176813 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1213 00:15:08.292342  176813 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1213 00:15:08.292822  176813 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 00:15:08.293259  176813 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 00:15:08.293339  176813 kubeadm.go:322] [certs] Using the existing "sa" key
	I1213 00:15:08.293429  176813 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 00:15:08.526145  176813 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 00:15:08.586842  176813 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 00:15:08.636575  176813 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 00:15:08.706448  176813 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 00:15:08.710760  176813 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 00:15:08.713664  176813 out.go:204]   - Booting up control plane ...
	I1213 00:15:08.713773  176813 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 00:15:08.718431  176813 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 00:15:08.719490  176813 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 00:15:08.720327  176813 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 00:15:08.722707  176813 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 00:15:19.226839  176813 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503804 seconds
	I1213 00:15:19.227005  176813 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 00:15:19.245054  176813 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 00:15:19.773910  176813 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 00:15:19.774100  176813 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-508612 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1213 00:15:20.284136  176813 kubeadm.go:322] [bootstrap-token] Using token: lgq05i.maaa534t8w734gvq
	I1213 00:15:20.286042  176813 out.go:204]   - Configuring RBAC rules ...
	I1213 00:15:20.286186  176813 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 00:15:20.297875  176813 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 00:15:20.305644  176813 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 00:15:20.314089  176813 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 00:15:20.319091  176813 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 00:15:20.387872  176813 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1213 00:15:20.733546  176813 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1213 00:15:20.735072  176813 kubeadm.go:322] 
	I1213 00:15:20.735157  176813 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1213 00:15:20.735168  176813 kubeadm.go:322] 
	I1213 00:15:20.735280  176813 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1213 00:15:20.735291  176813 kubeadm.go:322] 
	I1213 00:15:20.735314  176813 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1213 00:15:20.735389  176813 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 00:15:20.735451  176813 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 00:15:20.735459  176813 kubeadm.go:322] 
	I1213 00:15:20.735517  176813 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1213 00:15:20.735602  176813 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 00:15:20.735660  176813 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 00:15:20.735666  176813 kubeadm.go:322] 
	I1213 00:15:20.735757  176813 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1213 00:15:20.735867  176813 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1213 00:15:20.735889  176813 kubeadm.go:322] 
	I1213 00:15:20.736036  176813 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token lgq05i.maaa534t8w734gvq \
	I1213 00:15:20.736152  176813 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d \
	I1213 00:15:20.736223  176813 kubeadm.go:322]     --control-plane 	  
	I1213 00:15:20.736240  176813 kubeadm.go:322] 
	I1213 00:15:20.736348  176813 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1213 00:15:20.736357  176813 kubeadm.go:322] 
	I1213 00:15:20.736472  176813 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token lgq05i.maaa534t8w734gvq \
	I1213 00:15:20.736596  176813 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:aa40a5589a05d32d81998c6d63deb022f8d335acef9de1c3b6ae77da0351268d 
	I1213 00:15:20.737307  176813 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
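
The join commands printed above embed a --discovery-token-ca-cert-hash; that value is the SHA-256 of the cluster CA certificate's DER-encoded public key (SubjectPublicKeyInfo). The Go sketch below recomputes it from a ca.crt as an illustration; the path matches the certificateDir reported earlier in this log, but reading it locally like this is an assumption, not how kubeadm itself surfaces the hash.

    // Recomputes a kubeadm discovery-token-ca-cert-hash from a CA certificate PEM.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Path taken from the certificateDir logged above; adjust if reading from elsewhere.
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            fmt.Println("read ca.crt:", err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse certificate:", err)
            return
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA's public key.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            fmt.Println("marshal public key:", err)
            return
        }
        sum := sha256.Sum256(spki)
        fmt.Printf("sha256:%x\n", sum)
    }
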
	I1213 00:15:20.737332  176813 cni.go:84] Creating CNI manager for ""
	I1213 00:15:20.737340  176813 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 00:15:20.739085  176813 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 00:15:20.740295  176813 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 00:15:20.749618  176813 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
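
The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration the "Configuring bridge CNI" step refers to. Its exact contents are not printed in the log, so the Go sketch below only emits a hypothetical bridge conflist for illustration; the field names follow the upstream bridge and host-local plugins, and the network name and subnet are placeholders, not necessarily what minikube writes.

    // Emits an illustrative bridge CNI conflist; NOT the exact file minikube writes.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        conflist := map[string]interface{}{
            "cniVersion": "0.3.1",
            "name":       "bridge", // placeholder name; the real file's name is not shown in the log
            "plugins": []map[string]interface{}{
                {
                    "type":        "bridge",
                    "bridge":      "bridge",
                    "isGateway":   true,
                    "ipMasq":      true,
                    "hairpinMode": true,
                    "ipam": map[string]interface{}{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16", // placeholder pod CIDR
                    },
                },
                // portmap is commonly chained after bridge to support hostPort mappings.
                {"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
            },
        }
        out, _ := json.MarshalIndent(conflist, "", "  ")
        fmt.Println(string(out))
    }
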
	I1213 00:15:20.767876  176813 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 00:15:20.767933  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:20.767984  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446 minikube.k8s.io/name=old-k8s-version-508612 minikube.k8s.io/updated_at=2023_12_13T00_15_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:21.051677  176813 ops.go:34] apiserver oom_adj: -16
	I1213 00:15:21.051709  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:21.148546  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:21.741424  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:22.240885  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:22.741651  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:23.241662  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:23.741098  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:24.241530  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:24.741035  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:25.241391  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:25.741004  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:26.241402  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:26.741333  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:27.241828  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:27.741151  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:28.240933  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:28.741661  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:29.241431  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:29.741667  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:30.241070  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:30.741117  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:31.241355  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:31.741697  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:32.241779  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:32.741165  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:33.241739  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:33.741499  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:34.241477  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:34.740804  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:35.241596  176813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 00:15:35.374344  176813 kubeadm.go:1088] duration metric: took 14.606462065s to wait for elevateKubeSystemPrivileges.
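
The burst of `kubectl get sa default` invocations from 00:15:21 to 00:15:35 is minikube polling, roughly every half second, until the default ServiceAccount exists (the last step of elevateKubeSystemPrivileges). The sketch below shows the same poll-with-deadline pattern in Go using a hypothetical probe function in place of minikube's ssh_runner call; only the cadence and timeout shape are taken from the log.

    // Generic deadline-bound polling, mirroring the ~500ms retry cadence visible in the log.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // pollUntil calls probe every interval until it succeeds or the deadline passes.
    func pollUntil(deadline, interval time.Duration, probe func() error) error {
        stop := time.Now().Add(deadline)
        for {
            if err := probe(); err == nil {
                return nil
            }
            if time.Now().After(stop) {
                return errors.New("timed out waiting for condition")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        start := time.Now()
        attempts := 0
        // Hypothetical probe standing in for "kubectl get sa default" succeeding on the Nth try.
        err := pollUntil(30*time.Second, 500*time.Millisecond, func() error {
            attempts++
            if attempts < 5 {
                return errors.New("service account not found yet")
            }
            return nil
        })
        fmt.Printf("done after %d attempts in %s, err=%v\n", attempts, time.Since(start), err)
    }
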
	I1213 00:15:35.374388  176813 kubeadm.go:406] StartCluster complete in 5m41.120911791s
	I1213 00:15:35.374416  176813 settings.go:142] acquiring lock: {Name:mk332c5fbb2c25a150b94bd784d9d2d857d2da1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:15:35.374522  176813 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1213 00:15:35.376587  176813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/kubeconfig: {Name:mk50f960c26191ba9aa1285123bae9b745e10e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 00:15:35.376829  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 00:15:35.376896  176813 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1213 00:15:35.376998  176813 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-508612"
	I1213 00:15:35.377018  176813 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-508612"
	I1213 00:15:35.377026  176813 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-508612"
	W1213 00:15:35.377036  176813 addons.go:240] addon storage-provisioner should already be in state true
	I1213 00:15:35.377038  176813 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-508612"
	I1213 00:15:35.377075  176813 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-508612"
	W1213 00:15:35.377089  176813 addons.go:240] addon metrics-server should already be in state true
	I1213 00:15:35.377107  176813 host.go:66] Checking if "old-k8s-version-508612" exists ...
	I1213 00:15:35.377140  176813 host.go:66] Checking if "old-k8s-version-508612" exists ...
	I1213 00:15:35.377536  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.377569  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.377577  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.377603  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.377036  176813 config.go:182] Loaded profile config "old-k8s-version-508612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1213 00:15:35.377038  176813 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-508612"
	I1213 00:15:35.378232  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.378269  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.396758  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38365
	I1213 00:15:35.397242  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.397563  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46353
	I1213 00:15:35.397732  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34007
	I1213 00:15:35.398240  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.398249  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.398768  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.398789  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.398927  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.398944  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.399039  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.399048  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.399144  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.399485  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.399506  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.399699  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:15:35.399783  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.399822  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.400014  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.400052  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.403424  176813 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-508612"
	W1213 00:15:35.403445  176813 addons.go:240] addon default-storageclass should already be in state true
	I1213 00:15:35.403470  176813 host.go:66] Checking if "old-k8s-version-508612" exists ...
	I1213 00:15:35.403784  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.403809  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.419742  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46263
	I1213 00:15:35.419763  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39553
	I1213 00:15:35.420351  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.420378  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.420912  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.420927  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.421042  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.421062  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.421403  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.421450  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.421588  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:15:35.421633  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:15:35.422473  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40033
	I1213 00:15:35.423216  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.423818  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:15:35.423875  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.423890  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.426328  176813 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1213 00:15:35.424310  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.424522  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:15:35.428333  176813 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 00:15:35.428351  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 00:15:35.428377  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:15:35.430256  176813 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 00:15:35.428950  176813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1213 00:15:35.430439  176813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1213 00:15:35.431959  176813 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:15:35.431260  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.431816  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:15:35.432011  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:15:35.431977  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 00:15:35.432031  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.432047  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:15:35.432199  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:15:35.432359  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:15:35.432587  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:15:35.434239  176813 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-508612" context rescaled to 1 replicas
	I1213 00:15:35.434275  176813 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 00:15:35.435769  176813 out.go:177] * Verifying Kubernetes components...
	I1213 00:15:35.437082  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:15:35.434982  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.435627  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:15:35.437148  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:15:35.437186  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.437343  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:15:35.437515  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:15:35.437646  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:15:35.450115  176813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33843
	I1213 00:15:35.450582  176813 main.go:141] libmachine: () Calling .GetVersion
	I1213 00:15:35.451077  176813 main.go:141] libmachine: Using API Version  1
	I1213 00:15:35.451104  176813 main.go:141] libmachine: () Calling .SetConfigRaw
	I1213 00:15:35.451548  176813 main.go:141] libmachine: () Calling .GetMachineName
	I1213 00:15:35.451822  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetState
	I1213 00:15:35.453721  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .DriverName
	I1213 00:15:35.454034  176813 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 00:15:35.454052  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 00:15:35.454072  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHHostname
	I1213 00:15:35.456976  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.457326  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:da:91", ip: ""} in network mk-old-k8s-version-508612: {Iface:virbr1 ExpiryTime:2023-12-13 01:09:35 +0000 UTC Type:0 Mac:52:54:00:dd:da:91 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:old-k8s-version-508612 Clientid:01:52:54:00:dd:da:91}
	I1213 00:15:35.457351  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | domain old-k8s-version-508612 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:da:91 in network mk-old-k8s-version-508612
	I1213 00:15:35.457530  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHPort
	I1213 00:15:35.457709  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHKeyPath
	I1213 00:15:35.457859  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .GetSSHUsername
	I1213 00:15:35.458008  176813 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/old-k8s-version-508612/id_rsa Username:docker}
	I1213 00:15:35.599631  176813 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 00:15:35.607268  176813 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-508612" to be "Ready" ...
	I1213 00:15:35.607407  176813 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
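
The long pipeline above fetches the coredns ConfigMap, uses sed to splice a `hosts` block (mapping host.minikube.internal to 192.168.39.1) ahead of the `forward . /etc/resolv.conf` directive and a `log` directive ahead of `errors`, then replaces the ConfigMap. The Go sketch below performs the same string surgery for the hosts block only, on a simplified Corefile that is an assumption for illustration; the real one comes from the ConfigMap.

    // Splices a hosts block into a Corefile string the way the logged sed pipeline does.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Simplified Corefile; the real one is read from the coredns ConfigMap.
        corefile := `.:53 {
            errors
            health
            kubernetes cluster.local in-addr.arpa ip6.arpa
            forward . /etc/resolv.conf
            cache 30
    }`

        hostsBlock := `        hosts {
               192.168.39.1 host.minikube.internal
               fallthrough
            }
    `
        // Insert the hosts block immediately before the forward directive,
        // mirroring the sed address /^        forward . \/etc\/resolv.conf.*/i in the log.
        patched := strings.Replace(corefile,
            "            forward . /etc/resolv.conf",
            hostsBlock+"            forward . /etc/resolv.conf", 1)

        fmt.Println(patched)
    }

The "host record injected into CoreDNS's ConfigMap" line a little further down is minikube reporting that this replacement succeeded.
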
	I1213 00:15:35.627686  176813 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 00:15:35.627720  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1213 00:15:35.641865  176813 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 00:15:35.653972  176813 node_ready.go:49] node "old-k8s-version-508612" has status "Ready":"True"
	I1213 00:15:35.654008  176813 node_ready.go:38] duration metric: took 46.699606ms waiting for node "old-k8s-version-508612" to be "Ready" ...
	I1213 00:15:35.654022  176813 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:15:35.701904  176813 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 00:15:35.701939  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 00:15:35.722752  176813 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-4xsr7" in "kube-system" namespace to be "Ready" ...
	I1213 00:15:35.779684  176813 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:15:35.779719  176813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 00:15:35.871071  176813 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 00:15:36.486377  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.486409  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.486428  176813 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1213 00:15:36.486495  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.486513  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.486715  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.486725  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.486734  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.486741  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.486816  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.486826  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.486834  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.486843  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.487015  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.487022  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.487048  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.487156  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.487172  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.487186  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.535004  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.535026  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.535335  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.535394  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.535407  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.671282  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.671308  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.671649  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.671719  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.671739  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.671758  176813 main.go:141] libmachine: Making call to close driver server
	I1213 00:15:36.671771  176813 main.go:141] libmachine: (old-k8s-version-508612) Calling .Close
	I1213 00:15:36.672067  176813 main.go:141] libmachine: Successfully made call to close driver server
	I1213 00:15:36.672091  176813 main.go:141] libmachine: Making call to close connection to plugin binary
	I1213 00:15:36.672092  176813 main.go:141] libmachine: (old-k8s-version-508612) DBG | Closing plugin on server side
	I1213 00:15:36.672102  176813 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-508612"
	I1213 00:15:36.673881  176813 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1213 00:15:36.675200  176813 addons.go:502] enable addons completed in 1.298322525s: enabled=[storage-provisioner default-storageclass metrics-server]
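
From here on, the log is the pod_ready and system_pods pollers waiting on the remaining kube-system pods: metrics-server stays Pending (its image cannot be pulled from the fake.domain registry used by the test) and etcd/kube-apiserver/kube-controller-manager/kube-scheduler have not yet appeared in the pod list. One way to check the same Ready condition by shelling out, sketched in Go below; it is an illustration using a kubectl jsonpath query, not the client-go-based check minikube performs, and the pod name is simply copied from the log.

    // Reads a pod's Ready condition via kubectl, as one could from the command line.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func podReady(namespace, name string) (bool, error) {
        // The jsonpath filter selects the status of the condition whose type is "Ready".
        out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        // Pod name taken from the log; it reports false while metrics-server cannot start.
        ready, err := podReady("kube-system", "metrics-server-74d5856cc6-xcqf5")
        fmt.Printf("ready=%v err=%v\n", ready, err)
    }
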
	I1213 00:15:37.860212  176813 pod_ready.go:102] pod "coredns-5644d7b6d9-4xsr7" in "kube-system" namespace has status "Ready":"False"
	I1213 00:15:40.350347  176813 pod_ready.go:92] pod "coredns-5644d7b6d9-4xsr7" in "kube-system" namespace has status "Ready":"True"
	I1213 00:15:40.350370  176813 pod_ready.go:81] duration metric: took 4.627584432s waiting for pod "coredns-5644d7b6d9-4xsr7" in "kube-system" namespace to be "Ready" ...
	I1213 00:15:40.350383  176813 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wz29m" in "kube-system" namespace to be "Ready" ...
	I1213 00:15:40.356218  176813 pod_ready.go:92] pod "kube-proxy-wz29m" in "kube-system" namespace has status "Ready":"True"
	I1213 00:15:40.356240  176813 pod_ready.go:81] duration metric: took 5.84816ms waiting for pod "kube-proxy-wz29m" in "kube-system" namespace to be "Ready" ...
	I1213 00:15:40.356252  176813 pod_ready.go:38] duration metric: took 4.702215033s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 00:15:40.356270  176813 api_server.go:52] waiting for apiserver process to appear ...
	I1213 00:15:40.356324  176813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 00:15:40.372391  176813 api_server.go:72] duration metric: took 4.938079614s to wait for apiserver process to appear ...
	I1213 00:15:40.372424  176813 api_server.go:88] waiting for apiserver healthz status ...
	I1213 00:15:40.372459  176813 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1213 00:15:40.378882  176813 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I1213 00:15:40.379747  176813 api_server.go:141] control plane version: v1.16.0
	I1213 00:15:40.379770  176813 api_server.go:131] duration metric: took 7.338199ms to wait for apiserver health ...
	I1213 00:15:40.379780  176813 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 00:15:40.383090  176813 system_pods.go:59] 4 kube-system pods found
	I1213 00:15:40.383110  176813 system_pods.go:61] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:40.383115  176813 system_pods.go:61] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:40.383121  176813 system_pods.go:61] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:40.383126  176813 system_pods.go:61] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:40.383133  176813 system_pods.go:74] duration metric: took 3.346988ms to wait for pod list to return data ...
	I1213 00:15:40.383140  176813 default_sa.go:34] waiting for default service account to be created ...
	I1213 00:15:40.385822  176813 default_sa.go:45] found service account: "default"
	I1213 00:15:40.385843  176813 default_sa.go:55] duration metric: took 2.696485ms for default service account to be created ...
	I1213 00:15:40.385851  176813 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 00:15:40.390030  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:40.390056  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:40.390061  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:40.390068  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:40.390072  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:40.390094  176813 retry.go:31] will retry after 206.30305ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:40.602546  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:40.602577  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:40.602582  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:40.602589  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:40.602593  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:40.602611  176813 retry.go:31] will retry after 375.148566ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:40.987598  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:40.987626  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:40.987631  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:40.987639  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:40.987645  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:40.987663  176813 retry.go:31] will retry after 354.607581ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:41.347931  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:41.347965  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:41.347974  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:41.347984  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:41.347992  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:41.348012  176813 retry.go:31] will retry after 443.179207ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:41.796661  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:41.796687  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:41.796692  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:41.796711  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:41.796716  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:41.796733  176813 retry.go:31] will retry after 468.875458ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:42.271565  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:42.271591  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:42.271596  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:42.271603  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:42.271608  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:42.271624  176813 retry.go:31] will retry after 696.629881ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:42.974971  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:42.974997  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:42.975003  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:42.975009  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:42.975015  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:42.975031  176813 retry.go:31] will retry after 830.83436ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:43.810755  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:43.810784  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:43.810792  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:43.810802  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:43.810808  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:43.810830  176813 retry.go:31] will retry after 1.429308487s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:45.245813  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:45.245844  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:45.245852  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:45.245862  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:45.245867  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:45.245887  176813 retry.go:31] will retry after 1.715356562s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:46.966484  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:46.966512  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:46.966517  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:46.966523  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:46.966529  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:46.966546  176813 retry.go:31] will retry after 2.125852813s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:49.097419  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:49.097450  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:49.097460  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:49.097472  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:49.097478  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:49.097496  176813 retry.go:31] will retry after 2.902427415s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:52.005062  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:52.005097  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:52.005106  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:52.005119  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:52.005128  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:52.005154  176813 retry.go:31] will retry after 3.461524498s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:55.471450  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:55.471474  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:55.471480  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:55.471487  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:55.471492  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:55.471509  176813 retry.go:31] will retry after 2.969353102s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:15:58.445285  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:15:58.445316  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:15:58.445324  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:15:58.445334  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:15:58.445341  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:15:58.445363  176813 retry.go:31] will retry after 3.938751371s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:02.389811  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:16:02.389839  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:02.389845  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:02.389851  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:02.389856  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:02.389873  176813 retry.go:31] will retry after 5.281550171s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:07.676759  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:16:07.676786  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:07.676791  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:07.676798  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:07.676802  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:07.676820  176813 retry.go:31] will retry after 8.193775139s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:15.875917  176813 system_pods.go:86] 4 kube-system pods found
	I1213 00:16:15.875946  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:15.875951  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:15.875958  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:15.875962  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:15.875980  176813 retry.go:31] will retry after 8.515960159s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:24.397972  176813 system_pods.go:86] 5 kube-system pods found
	I1213 00:16:24.398006  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:24.398014  176813 system_pods.go:89] "etcd-old-k8s-version-508612" [de991ae9-f906-498b-bda3-3cf40035fd6a] Running
	I1213 00:16:24.398021  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:24.398032  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:24.398039  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:24.398060  176813 retry.go:31] will retry after 10.707543157s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I1213 00:16:35.112639  176813 system_pods.go:86] 7 kube-system pods found
	I1213 00:16:35.112667  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:35.112672  176813 system_pods.go:89] "etcd-old-k8s-version-508612" [de991ae9-f906-498b-bda3-3cf40035fd6a] Running
	I1213 00:16:35.112677  176813 system_pods.go:89] "kube-controller-manager-old-k8s-version-508612" [c6a195a2-e710-4791-b0a3-32618d3c752c] Running
	I1213 00:16:35.112681  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:35.112685  176813 system_pods.go:89] "kube-scheduler-old-k8s-version-508612" [6ee728bb-d81b-43f2-8aa3-4848871a8f41] Running
	I1213 00:16:35.112691  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:35.112696  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:35.112712  176813 retry.go:31] will retry after 13.429366805s: missing components: kube-apiserver
	I1213 00:16:48.550673  176813 system_pods.go:86] 8 kube-system pods found
	I1213 00:16:48.550704  176813 system_pods.go:89] "coredns-5644d7b6d9-4xsr7" [69174686-a36a-4b06-b723-0df832046815] Running
	I1213 00:16:48.550710  176813 system_pods.go:89] "etcd-old-k8s-version-508612" [de991ae9-f906-498b-bda3-3cf40035fd6a] Running
	I1213 00:16:48.550714  176813 system_pods.go:89] "kube-apiserver-old-k8s-version-508612" [1473501b-d17d-4bbb-a61a-1d244f54f70c] Running
	I1213 00:16:48.550718  176813 system_pods.go:89] "kube-controller-manager-old-k8s-version-508612" [c6a195a2-e710-4791-b0a3-32618d3c752c] Running
	I1213 00:16:48.550722  176813 system_pods.go:89] "kube-proxy-wz29m" [0efb7496-51ae-454f-88e9-cc8b6e9680c2] Running
	I1213 00:16:48.550726  176813 system_pods.go:89] "kube-scheduler-old-k8s-version-508612" [6ee728bb-d81b-43f2-8aa3-4848871a8f41] Running
	I1213 00:16:48.550733  176813 system_pods.go:89] "metrics-server-74d5856cc6-xcqf5" [ec91bd33-6503-4c42-b320-240f391ede74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 00:16:48.550737  176813 system_pods.go:89] "storage-provisioner" [6520d208-a69d-43ba-b107-1767102a62d4] Running
	I1213 00:16:48.550747  176813 system_pods.go:126] duration metric: took 1m8.164889078s to wait for k8s-apps to be running ...
	I1213 00:16:48.550756  176813 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 00:16:48.550811  176813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 00:16:48.568833  176813 system_svc.go:56] duration metric: took 18.062353ms WaitForService to wait for kubelet.
	I1213 00:16:48.568876  176813 kubeadm.go:581] duration metric: took 1m13.134572871s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1213 00:16:48.568901  176813 node_conditions.go:102] verifying NodePressure condition ...
	I1213 00:16:48.573103  176813 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1213 00:16:48.573128  176813 node_conditions.go:123] node cpu capacity is 2
	I1213 00:16:48.573137  176813 node_conditions.go:105] duration metric: took 4.231035ms to run NodePressure ...
	I1213 00:16:48.573148  176813 start.go:228] waiting for startup goroutines ...
	I1213 00:16:48.573154  176813 start.go:233] waiting for cluster config update ...
	I1213 00:16:48.573163  176813 start.go:242] writing updated cluster config ...
	I1213 00:16:48.573436  176813 ssh_runner.go:195] Run: rm -f paused
	I1213 00:16:48.627109  176813 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1213 00:16:48.628688  176813 out.go:177] 
	W1213 00:16:48.630154  176813 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1213 00:16:48.631498  176813 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1213 00:16:48.633089  176813 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-508612" cluster and "default" namespace by default
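(Note: the warning above flags a large version skew between the host kubectl (v1.28.4) and the cluster (v1.16.0). A minimal way to avoid that skew, following the hint printed in the log and assuming the same binary path and profile name used in this run, is to drive the cluster through minikube's bundled kubectl, for example:

    out/minikube-linux-amd64 -p old-k8s-version-508612 kubectl -- get pods -n kube-system

Substitute your own profile name and binary location if they differ.)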
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-12-13 00:09:34 UTC, ends at Wed 2023-12-13 00:28:49 UTC. --
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.687124565Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427329687102150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=90f53356-a5fc-4b25-9ec1-dbe2618bf375 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.687833363Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=25005a70-4907-4c63-863f-5f5d0a6832f4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.687901161Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=25005a70-4907-4c63-863f-5f5d0a6832f4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.688255725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ebfeec5f1c53776f7879ac4df43b384de96608b731f6f9214de46f78addb5bea,PodSandboxId:a4317355d619c71c83818eab8136e0f00a4d80c92f98f32841a12cf4538c0239,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702426537962582180,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wz29m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efb7496-51ae-454f-88e9-cc8b6e9680c2,},Annotations:map[string]string{io.kubernetes.container.hash: f9a8d0ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7f9cca46c1cbbb355770d4ebb6032ce196df559af73e92da021f845d1b871df,PodSandboxId:9bb6d310d2d9d6ec22629660a6052619445b4464d96daa37dca3eb27a93b3020,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426537771103303,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6520d208-a69d-43ba-b107-1767102a62d4,},Annotations:map[string]string{io.kubernetes.container.hash: e9475179,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ca2665660b00c673cfb2b2d0ae3fee983536ad6d8679a7811ffe8a3fc52515,PodSandboxId:aebd7a876fff617e2fbf3ba11fc2efe19cb6904afd9296d9c41cd9aa1dafdd40,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702426537209106347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-4xsr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69174686-a36a-4b06-b723-0df832046815,},Annotations:map[string]string{io.kubernetes.container.hash: 479e2a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654928044f339493dc40cf5c825602a04575e244b1ec8da0467d52b751450877,PodSandboxId:864c02a6bf69dbf4acadee735fce57cd716c630b38e9a9b0861276376eaad3c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702426511386648716,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b127afa6806b59f84e6ab3b018476fc4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 56252b9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1b73166520a4ebe05cdc279ad982fecc97db9e9240b2f71bb385a184ccfc76b,PodSandboxId:ca1a2c9e6ddb9483cc36af2ae1ee7a2f864e5272b86b2a08e0bd33aa4dbc5a17,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702426510108286185,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c36af79b91fb29ce09d38d1f84d66aa4d77969d6e4f3b59dbc1ba16417729a9,PodSandboxId:51ddc1aca59e145f83e8f5b5b31e5a1520017210f22b92f9f42f37df213936b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702426510077288631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d51289d3bc2e6fd6537f4477cf2fcd88c09e83d9e349289361f2b60f34e3a92,PodSandboxId:06b27d49cab092f9c749e5b8cf0ec53a152a6f8509ee480de11414738ec19f12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702426509241347655,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fc4f862b1b155f977dd622de8fefee,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 452ed09a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdd6940df379f70b0e8fb3128bfee7c86043aca24780c0d05a88d4dcad7e1cf3,PodSandboxId:06b27d49cab092f9c749e5b8cf0ec53a152a6f8509ee480de11414738ec19f12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1702426207108147850,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fc4f862b1b155f977dd622de8fefee,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 452ed09a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=25005a70-4907-4c63-863f-5f5d0a6832f4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.735327307Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c4f789c2-10c0-4c7f-867c-6f0def257f2b name=/runtime.v1.RuntimeService/Version
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.735391892Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c4f789c2-10c0-4c7f-867c-6f0def257f2b name=/runtime.v1.RuntimeService/Version
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.737427594Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6da38c99-fa46-4cd0-982e-7f871c123f71 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.737833942Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427329737818067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=6da38c99-fa46-4cd0-982e-7f871c123f71 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.738470455Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f88e8ea0-28c9-46d9-9b18-ba56ed77b12c name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.738518605Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f88e8ea0-28c9-46d9-9b18-ba56ed77b12c name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.738726908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ebfeec5f1c53776f7879ac4df43b384de96608b731f6f9214de46f78addb5bea,PodSandboxId:a4317355d619c71c83818eab8136e0f00a4d80c92f98f32841a12cf4538c0239,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702426537962582180,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wz29m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efb7496-51ae-454f-88e9-cc8b6e9680c2,},Annotations:map[string]string{io.kubernetes.container.hash: f9a8d0ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7f9cca46c1cbbb355770d4ebb6032ce196df559af73e92da021f845d1b871df,PodSandboxId:9bb6d310d2d9d6ec22629660a6052619445b4464d96daa37dca3eb27a93b3020,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426537771103303,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6520d208-a69d-43ba-b107-1767102a62d4,},Annotations:map[string]string{io.kubernetes.container.hash: e9475179,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ca2665660b00c673cfb2b2d0ae3fee983536ad6d8679a7811ffe8a3fc52515,PodSandboxId:aebd7a876fff617e2fbf3ba11fc2efe19cb6904afd9296d9c41cd9aa1dafdd40,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702426537209106347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-4xsr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69174686-a36a-4b06-b723-0df832046815,},Annotations:map[string]string{io.kubernetes.container.hash: 479e2a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654928044f339493dc40cf5c825602a04575e244b1ec8da0467d52b751450877,PodSandboxId:864c02a6bf69dbf4acadee735fce57cd716c630b38e9a9b0861276376eaad3c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702426511386648716,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b127afa6806b59f84e6ab3b018476fc4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 56252b9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1b73166520a4ebe05cdc279ad982fecc97db9e9240b2f71bb385a184ccfc76b,PodSandboxId:ca1a2c9e6ddb9483cc36af2ae1ee7a2f864e5272b86b2a08e0bd33aa4dbc5a17,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702426510108286185,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c36af79b91fb29ce09d38d1f84d66aa4d77969d6e4f3b59dbc1ba16417729a9,PodSandboxId:51ddc1aca59e145f83e8f5b5b31e5a1520017210f22b92f9f42f37df213936b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702426510077288631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d51289d3bc2e6fd6537f4477cf2fcd88c09e83d9e349289361f2b60f34e3a92,PodSandboxId:06b27d49cab092f9c749e5b8cf0ec53a152a6f8509ee480de11414738ec19f12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702426509241347655,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fc4f862b1b155f977dd622de8fefee,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 452ed09a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdd6940df379f70b0e8fb3128bfee7c86043aca24780c0d05a88d4dcad7e1cf3,PodSandboxId:06b27d49cab092f9c749e5b8cf0ec53a152a6f8509ee480de11414738ec19f12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1702426207108147850,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fc4f862b1b155f977dd622de8fefee,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 452ed09a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f88e8ea0-28c9-46d9-9b18-ba56ed77b12c name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.782317854Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fa344e71-d608-4584-b2d2-1ad874340d36 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.782373523Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fa344e71-d608-4584-b2d2-1ad874340d36 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.784451255Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7ad39e38-4d52-4d85-959d-a15b7b6db117 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.784831886Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427329784818886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=7ad39e38-4d52-4d85-959d-a15b7b6db117 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.785551904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=00bbc89a-2e0b-4303-ab06-646968b250b2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.785599313Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=00bbc89a-2e0b-4303-ab06-646968b250b2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.785798133Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ebfeec5f1c53776f7879ac4df43b384de96608b731f6f9214de46f78addb5bea,PodSandboxId:a4317355d619c71c83818eab8136e0f00a4d80c92f98f32841a12cf4538c0239,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702426537962582180,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wz29m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efb7496-51ae-454f-88e9-cc8b6e9680c2,},Annotations:map[string]string{io.kubernetes.container.hash: f9a8d0ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7f9cca46c1cbbb355770d4ebb6032ce196df559af73e92da021f845d1b871df,PodSandboxId:9bb6d310d2d9d6ec22629660a6052619445b4464d96daa37dca3eb27a93b3020,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426537771103303,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6520d208-a69d-43ba-b107-1767102a62d4,},Annotations:map[string]string{io.kubernetes.container.hash: e9475179,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ca2665660b00c673cfb2b2d0ae3fee983536ad6d8679a7811ffe8a3fc52515,PodSandboxId:aebd7a876fff617e2fbf3ba11fc2efe19cb6904afd9296d9c41cd9aa1dafdd40,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702426537209106347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-4xsr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69174686-a36a-4b06-b723-0df832046815,},Annotations:map[string]string{io.kubernetes.container.hash: 479e2a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654928044f339493dc40cf5c825602a04575e244b1ec8da0467d52b751450877,PodSandboxId:864c02a6bf69dbf4acadee735fce57cd716c630b38e9a9b0861276376eaad3c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702426511386648716,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b127afa6806b59f84e6ab3b018476fc4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 56252b9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1b73166520a4ebe05cdc279ad982fecc97db9e9240b2f71bb385a184ccfc76b,PodSandboxId:ca1a2c9e6ddb9483cc36af2ae1ee7a2f864e5272b86b2a08e0bd33aa4dbc5a17,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702426510108286185,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c36af79b91fb29ce09d38d1f84d66aa4d77969d6e4f3b59dbc1ba16417729a9,PodSandboxId:51ddc1aca59e145f83e8f5b5b31e5a1520017210f22b92f9f42f37df213936b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702426510077288631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d51289d3bc2e6fd6537f4477cf2fcd88c09e83d9e349289361f2b60f34e3a92,PodSandboxId:06b27d49cab092f9c749e5b8cf0ec53a152a6f8509ee480de11414738ec19f12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702426509241347655,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fc4f862b1b155f977dd622de8fefee,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 452ed09a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdd6940df379f70b0e8fb3128bfee7c86043aca24780c0d05a88d4dcad7e1cf3,PodSandboxId:06b27d49cab092f9c749e5b8cf0ec53a152a6f8509ee480de11414738ec19f12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1702426207108147850,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fc4f862b1b155f977dd622de8fefee,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 452ed09a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=00bbc89a-2e0b-4303-ab06-646968b250b2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.823737474Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9fd3a219-665a-4349-afac-859c1a76ea40 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.823835257Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9fd3a219-665a-4349-afac-859c1a76ea40 name=/runtime.v1.RuntimeService/Version
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.825667319Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=bdf38359-cdc5-4008-8d1f-019482e44bb1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.826279063Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702427329826258761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=bdf38359-cdc5-4008-8d1f-019482e44bb1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.827134395Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=209f067b-0794-408c-a5dc-3f34b0577dd3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.827199392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=209f067b-0794-408c-a5dc-3f34b0577dd3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 00:28:49 old-k8s-version-508612 crio[716]: time="2023-12-13 00:28:49.827447493Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ebfeec5f1c53776f7879ac4df43b384de96608b731f6f9214de46f78addb5bea,PodSandboxId:a4317355d619c71c83818eab8136e0f00a4d80c92f98f32841a12cf4538c0239,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702426537962582180,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wz29m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0efb7496-51ae-454f-88e9-cc8b6e9680c2,},Annotations:map[string]string{io.kubernetes.container.hash: f9a8d0ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7f9cca46c1cbbb355770d4ebb6032ce196df559af73e92da021f845d1b871df,PodSandboxId:9bb6d310d2d9d6ec22629660a6052619445b4464d96daa37dca3eb27a93b3020,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702426537771103303,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6520d208-a69d-43ba-b107-1767102a62d4,},Annotations:map[string]string{io.kubernetes.container.hash: e9475179,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ca2665660b00c673cfb2b2d0ae3fee983536ad6d8679a7811ffe8a3fc52515,PodSandboxId:aebd7a876fff617e2fbf3ba11fc2efe19cb6904afd9296d9c41cd9aa1dafdd40,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702426537209106347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-4xsr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69174686-a36a-4b06-b723-0df832046815,},Annotations:map[string]string{io.kubernetes.container.hash: 479e2a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654928044f339493dc40cf5c825602a04575e244b1ec8da0467d52b751450877,PodSandboxId:864c02a6bf69dbf4acadee735fce57cd716c630b38e9a9b0861276376eaad3c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702426511386648716,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b127afa6806b59f84e6ab3b018476fc4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 56252b9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1b73166520a4ebe05cdc279ad982fecc97db9e9240b2f71bb385a184ccfc76b,PodSandboxId:ca1a2c9e6ddb9483cc36af2ae1ee7a2f864e5272b86b2a08e0bd33aa4dbc5a17,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702426510108286185,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c36af79b91fb29ce09d38d1f84d66aa4d77969d6e4f3b59dbc1ba16417729a9,PodSandboxId:51ddc1aca59e145f83e8f5b5b31e5a1520017210f22b92f9f42f37df213936b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702426510077288631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d51289d3bc2e6fd6537f4477cf2fcd88c09e83d9e349289361f2b60f34e3a92,PodSandboxId:06b27d49cab092f9c749e5b8cf0ec53a152a6f8509ee480de11414738ec19f12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702426509241347655,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fc4f862b1b155f977dd622de8fefee,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 452ed09a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdd6940df379f70b0e8fb3128bfee7c86043aca24780c0d05a88d4dcad7e1cf3,PodSandboxId:06b27d49cab092f9c749e5b8cf0ec53a152a6f8509ee480de11414738ec19f12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1702426207108147850,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-508612,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76fc4f862b1b155f977dd622de8fefee,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 452ed09a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=209f067b-0794-408c-a5dc-3f34b0577dd3 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ebfeec5f1c537       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   13 minutes ago      Running             kube-proxy                0                   a4317355d619c       kube-proxy-wz29m
	b7f9cca46c1cb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   9bb6d310d2d9d       storage-provisioner
	a1ca2665660b0       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   13 minutes ago      Running             coredns                   0                   aebd7a876fff6       coredns-5644d7b6d9-4xsr7
	654928044f339       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   13 minutes ago      Running             etcd                      0                   864c02a6bf69d       etcd-old-k8s-version-508612
	a1b73166520a4       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   13 minutes ago      Running             kube-controller-manager   0                   ca1a2c9e6ddb9       kube-controller-manager-old-k8s-version-508612
	3c36af79b91fb       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   13 minutes ago      Running             kube-scheduler            0                   51ddc1aca59e1       kube-scheduler-old-k8s-version-508612
	7d51289d3bc2e       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   13 minutes ago      Running             kube-apiserver            1                   06b27d49cab09       kube-apiserver-old-k8s-version-508612
	fdd6940df379f       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   18 minutes ago      Exited              kube-apiserver            0                   06b27d49cab09       kube-apiserver-old-k8s-version-508612
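(The table above shows two kube-apiserver containers in the same pod sandbox: attempt 0 Exited and attempt 1 Running, i.e. the apiserver container was restarted once. A sketch of how this listing could be reproduced on the node, assuming the same profile name as this run, is:

    out/minikube-linux-amd64 -p old-k8s-version-508612 ssh "sudo crictl ps -a"

crictl ps -a lists exited containers as well as running ones, which is why the older kube-apiserver attempt still appears here.)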
	
	* 
	* ==> coredns [a1ca2665660b00c673cfb2b2d0ae3fee983536ad6d8679a7811ffe8a3fc52515] <==
	* .:53
	2023-12-13T00:15:37.549Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2023-12-13T00:15:37.549Z [INFO] CoreDNS-1.6.2
	2023-12-13T00:15:37.549Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-12-13T00:15:37.564Z [INFO] 127.0.0.1:41748 - 8538 "HINFO IN 2421315440976780902.6049602843531883062. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013914876s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-508612
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-508612
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a4cfdd7fe6105c8f2fb237e157ac115c68ce5446
	                    minikube.k8s.io/name=old-k8s-version-508612
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_13T00_15_20_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 13 Dec 2023 00:15:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 13 Dec 2023 00:28:16 +0000   Wed, 13 Dec 2023 00:15:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 13 Dec 2023 00:28:16 +0000   Wed, 13 Dec 2023 00:15:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 13 Dec 2023 00:28:16 +0000   Wed, 13 Dec 2023 00:15:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 13 Dec 2023 00:28:16 +0000   Wed, 13 Dec 2023 00:15:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    old-k8s-version-508612
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 dbb494d4ff9248d69186027f329440dc
	 System UUID:                dbb494d4-ff92-48d6-9186-027f329440dc
	 Boot ID:                    bec660e6-c313-4c0b-ad4b-987009402d14
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-4xsr7                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                etcd-old-k8s-version-508612                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-apiserver-old-k8s-version-508612             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-controller-manager-old-k8s-version-508612    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-proxy-wz29m                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-scheduler-old-k8s-version-508612             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                metrics-server-74d5856cc6-xcqf5                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet, old-k8s-version-508612     Node old-k8s-version-508612 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet, old-k8s-version-508612     Node old-k8s-version-508612 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet, old-k8s-version-508612     Node old-k8s-version-508612 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kube-proxy, old-k8s-version-508612  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Dec13 00:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000002] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070237] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.591315] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.523909] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153431] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.968635] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.264735] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.122499] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.158630] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.142308] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.264716] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[Dec13 00:10] systemd-fstab-generator[1028]: Ignoring "noauto" for root device
	[  +0.469196] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +25.619889] kauditd_printk_skb: 13 callbacks suppressed
	[  +7.157409] kauditd_printk_skb: 2 callbacks suppressed
	[Dec13 00:15] systemd-fstab-generator[3070]: Ignoring "noauto" for root device
	[  +0.669740] kauditd_printk_skb: 6 callbacks suppressed
	[Dec13 00:16] kauditd_printk_skb: 4 callbacks suppressed
	
	* 
	* ==> etcd [654928044f339493dc40cf5c825602a04575e244b1ec8da0467d52b751450877] <==
	* 2023-12-13 00:15:11.494483 I | raft: d9e0442f914d2c09 became follower at term 0
	2023-12-13 00:15:11.494495 I | raft: newRaft d9e0442f914d2c09 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-12-13 00:15:11.494499 I | raft: d9e0442f914d2c09 became follower at term 1
	2023-12-13 00:15:11.503725 W | auth: simple token is not cryptographically signed
	2023-12-13 00:15:11.508316 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-12-13 00:15:11.509473 I | etcdserver: d9e0442f914d2c09 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-12-13 00:15:11.510125 I | etcdserver/membership: added member d9e0442f914d2c09 [https://192.168.39.70:2380] to cluster b9ca18127a3e3182
	2023-12-13 00:15:11.511082 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-13 00:15:11.511255 I | embed: listening for metrics on http://192.168.39.70:2381
	2023-12-13 00:15:11.511461 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-13 00:15:11.995500 I | raft: d9e0442f914d2c09 is starting a new election at term 1
	2023-12-13 00:15:11.995843 I | raft: d9e0442f914d2c09 became candidate at term 2
	2023-12-13 00:15:11.995953 I | raft: d9e0442f914d2c09 received MsgVoteResp from d9e0442f914d2c09 at term 2
	2023-12-13 00:15:11.995982 I | raft: d9e0442f914d2c09 became leader at term 2
	2023-12-13 00:15:11.996123 I | raft: raft.node: d9e0442f914d2c09 elected leader d9e0442f914d2c09 at term 2
	2023-12-13 00:15:11.996570 I | etcdserver: setting up the initial cluster version to 3.3
	2023-12-13 00:15:11.998124 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-12-13 00:15:11.998184 I | etcdserver/api: enabled capabilities for version 3.3
	2023-12-13 00:15:11.998208 I | etcdserver: published {Name:old-k8s-version-508612 ClientURLs:[https://192.168.39.70:2379]} to cluster b9ca18127a3e3182
	2023-12-13 00:15:11.998273 I | embed: ready to serve client requests
	2023-12-13 00:15:11.998975 I | embed: ready to serve client requests
	2023-12-13 00:15:11.999804 I | embed: serving client requests on 192.168.39.70:2379
	2023-12-13 00:15:12.001521 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-13 00:25:12.323422 I | mvcc: store.index: compact 661
	2023-12-13 00:25:12.325592 I | mvcc: finished scheduled compaction at 661 (took 1.579137ms)
	
	* 
	* ==> kernel <==
	*  00:28:50 up 19 min,  0 users,  load average: 0.27, 0.23, 0.19
	Linux old-k8s-version-508612 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [7d51289d3bc2e6fd6537f4477cf2fcd88c09e83d9e349289361f2b60f34e3a92] <==
	* I1213 00:21:16.609237       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1213 00:21:16.609369       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 00:21:16.609433       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:21:16.609448       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1213 00:23:16.610518       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1213 00:23:16.610713       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 00:23:16.610811       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:23:16.610827       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1213 00:25:16.612276       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1213 00:25:16.612387       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 00:25:16.612446       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:25:16.612456       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1213 00:26:16.612840       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1213 00:26:16.612975       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 00:26:16.613111       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:26:16.613149       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1213 00:28:16.613628       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1213 00:28:16.613754       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 00:28:16.613823       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1213 00:28:16.613831       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-apiserver [fdd6940df379f70b0e8fb3128bfee7c86043aca24780c0d05a88d4dcad7e1cf3] <==
	* W1213 00:15:06.352611       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.352673       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.352705       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.352731       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.352793       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.352740       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.352859       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.353006       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.353275       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.353578       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.353645       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.353667       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.353725       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.353746       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.353767       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.354318       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.354382       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.354409       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.354458       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.354481       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.354504       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.354529       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:06.354584       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:07.634820       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1213 00:15:07.641362       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-controller-manager [a1b73166520a4ebe05cdc279ad982fecc97db9e9240b2f71bb385a184ccfc76b] <==
	* W1213 00:22:31.622357       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:22:39.134265       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:23:03.624451       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:23:09.386784       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:23:35.626936       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:23:39.638871       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:24:07.629491       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:24:09.890951       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:24:39.631502       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:24:40.143269       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1213 00:25:10.395151       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:25:11.633436       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:25:40.648195       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:25:43.635506       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:26:10.900686       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:26:15.637437       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:26:41.152832       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:26:47.640304       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:27:11.404898       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:27:19.642685       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:27:41.657197       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:27:51.644389       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:28:11.909603       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1213 00:28:23.646454       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1213 00:28:42.161836       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [ebfeec5f1c53776f7879ac4df43b384de96608b731f6f9214de46f78addb5bea] <==
	* W1213 00:15:38.194663       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1213 00:15:38.202616       1 node.go:135] Successfully retrieved node IP: 192.168.39.70
	I1213 00:15:38.202687       1 server_others.go:149] Using iptables Proxier.
	I1213 00:15:38.203191       1 server.go:529] Version: v1.16.0
	I1213 00:15:38.204761       1 config.go:131] Starting endpoints config controller
	I1213 00:15:38.204825       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1213 00:15:38.204843       1 config.go:313] Starting service config controller
	I1213 00:15:38.204853       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1213 00:15:38.305231       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1213 00:15:38.305573       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [3c36af79b91fb29ce09d38d1f84d66aa4d77969d6e4f3b59dbc1ba16417729a9] <==
	* I1213 00:15:15.593714       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1213 00:15:15.605612       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1213 00:15:15.648369       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 00:15:15.648477       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1213 00:15:15.648511       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1213 00:15:15.648544       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1213 00:15:15.648574       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1213 00:15:15.650725       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1213 00:15:15.650799       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1213 00:15:15.650840       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1213 00:15:15.650890       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1213 00:15:15.651308       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1213 00:15:15.651506       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1213 00:15:16.651497       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 00:15:16.653129       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1213 00:15:16.653870       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1213 00:15:16.654679       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1213 00:15:16.657128       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1213 00:15:16.659979       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1213 00:15:16.660869       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1213 00:15:16.661987       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1213 00:15:16.665279       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1213 00:15:16.666769       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1213 00:15:16.666840       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1213 00:15:35.269373       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-12-13 00:09:34 UTC, ends at Wed 2023-12-13 00:28:50 UTC. --
	Dec 13 00:24:17 old-k8s-version-508612 kubelet[3076]: E1213 00:24:17.787305    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:24:32 old-k8s-version-508612 kubelet[3076]: E1213 00:24:32.787001    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:24:43 old-k8s-version-508612 kubelet[3076]: E1213 00:24:43.786865    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:24:55 old-k8s-version-508612 kubelet[3076]: E1213 00:24:55.791462    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:25:07 old-k8s-version-508612 kubelet[3076]: E1213 00:25:07.786972    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:25:08 old-k8s-version-508612 kubelet[3076]: E1213 00:25:08.867874    3076 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Dec 13 00:25:21 old-k8s-version-508612 kubelet[3076]: E1213 00:25:21.786844    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:25:35 old-k8s-version-508612 kubelet[3076]: E1213 00:25:35.786864    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:25:49 old-k8s-version-508612 kubelet[3076]: E1213 00:25:49.787443    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:26:00 old-k8s-version-508612 kubelet[3076]: E1213 00:26:00.787151    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:26:15 old-k8s-version-508612 kubelet[3076]: E1213 00:26:15.797706    3076 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 13 00:26:15 old-k8s-version-508612 kubelet[3076]: E1213 00:26:15.797785    3076 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 13 00:26:15 old-k8s-version-508612 kubelet[3076]: E1213 00:26:15.797828    3076 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 13 00:26:15 old-k8s-version-508612 kubelet[3076]: E1213 00:26:15.797862    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Dec 13 00:26:30 old-k8s-version-508612 kubelet[3076]: E1213 00:26:30.788169    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:26:42 old-k8s-version-508612 kubelet[3076]: E1213 00:26:42.790549    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:26:57 old-k8s-version-508612 kubelet[3076]: E1213 00:26:57.787809    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:27:12 old-k8s-version-508612 kubelet[3076]: E1213 00:27:12.787489    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:27:27 old-k8s-version-508612 kubelet[3076]: E1213 00:27:27.787289    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:27:41 old-k8s-version-508612 kubelet[3076]: E1213 00:27:41.786726    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:27:55 old-k8s-version-508612 kubelet[3076]: E1213 00:27:55.787171    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:28:07 old-k8s-version-508612 kubelet[3076]: E1213 00:28:07.786933    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:28:21 old-k8s-version-508612 kubelet[3076]: E1213 00:28:21.786713    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:28:32 old-k8s-version-508612 kubelet[3076]: E1213 00:28:32.786869    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 13 00:28:44 old-k8s-version-508612 kubelet[3076]: E1213 00:28:44.787180    3076 pod_workers.go:191] Error syncing pod ec91bd33-6503-4c42-b320-240f391ede74 ("metrics-server-74d5856cc6-xcqf5_kube-system(ec91bd33-6503-4c42-b320-240f391ede74)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [b7f9cca46c1cbbb355770d4ebb6032ce196df559af73e92da021f845d1b871df] <==
	* I1213 00:15:37.913568       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 00:15:37.930431       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 00:15:37.930609       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 00:15:37.981995       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 00:15:37.982521       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e1a7b4b6-fff3-46ce-a8ea-3cbbb6c64a75", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-508612_76c671eb-525c-45d0-99b7-29d2ca8eea49 became leader
	I1213 00:15:37.982573       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-508612_76c671eb-525c-45d0-99b7-29d2ca8eea49!
	I1213 00:15:38.092270       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-508612_76c671eb-525c-45d0-99b7-29d2ca8eea49!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-508612 -n old-k8s-version-508612
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-508612 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-xcqf5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-508612 describe pod metrics-server-74d5856cc6-xcqf5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-508612 describe pod metrics-server-74d5856cc6-xcqf5: exit status 1 (66.252322ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-xcqf5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-508612 describe pod metrics-server-74d5856cc6-xcqf5: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (179.27s)

                                                
                                    

Test pass (234/299)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 53.92
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.4/json-events 16.86
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.08
17 TestDownloadOnly/v1.29.0-rc.2/json-events 44.5
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.14
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
26 TestBinaryMirror 0.57
27 TestOffline 107.19
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
32 TestAddons/Setup 218.75
34 TestAddons/parallel/Registry 22.9
36 TestAddons/parallel/InspektorGadget 11.4
37 TestAddons/parallel/MetricsServer 6.09
38 TestAddons/parallel/HelmTiller 19.22
40 TestAddons/parallel/CSI 53.67
41 TestAddons/parallel/Headlamp 15.48
42 TestAddons/parallel/CloudSpanner 5.73
43 TestAddons/parallel/LocalPath 62.84
44 TestAddons/parallel/NvidiaDevicePlugin 5.64
47 TestAddons/serial/GCPAuth/Namespaces 0.11
49 TestCertOptions 60.47
50 TestCertExpiration 293.91
52 TestForceSystemdFlag 88.77
53 TestForceSystemdEnv 97.15
55 TestKVMDriverInstallOrUpdate 3.1
59 TestErrorSpam/setup 47.53
60 TestErrorSpam/start 0.38
61 TestErrorSpam/status 0.77
62 TestErrorSpam/pause 1.62
63 TestErrorSpam/unpause 1.79
64 TestErrorSpam/stop 2.26
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 63.85
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 36.3
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.15
76 TestFunctional/serial/CacheCmd/cache/add_local 2.32
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.72
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
84 TestFunctional/serial/ExtraConfig 34.57
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.54
87 TestFunctional/serial/LogsFileCmd 1.57
88 TestFunctional/serial/InvalidService 4.53
90 TestFunctional/parallel/ConfigCmd 0.4
91 TestFunctional/parallel/DashboardCmd 44.78
92 TestFunctional/parallel/DryRun 0.33
93 TestFunctional/parallel/InternationalLanguage 0.18
94 TestFunctional/parallel/StatusCmd 1.24
98 TestFunctional/parallel/ServiceCmdConnect 12.67
99 TestFunctional/parallel/AddonsCmd 0.15
100 TestFunctional/parallel/PersistentVolumeClaim 57.02
102 TestFunctional/parallel/SSHCmd 0.47
103 TestFunctional/parallel/CpCmd 1.58
104 TestFunctional/parallel/MySQL 29.34
105 TestFunctional/parallel/FileSync 0.35
106 TestFunctional/parallel/CertSync 2.09
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
114 TestFunctional/parallel/License 0.64
124 TestFunctional/parallel/ServiceCmd/DeployApp 11.21
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
126 TestFunctional/parallel/ProfileCmd/profile_list 0.37
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
128 TestFunctional/parallel/MountCmd/any-port 9.87
129 TestFunctional/parallel/ServiceCmd/List 0.29
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
132 TestFunctional/parallel/MountCmd/specific-port 1.99
133 TestFunctional/parallel/ServiceCmd/Format 0.44
134 TestFunctional/parallel/ServiceCmd/URL 0.44
135 TestFunctional/parallel/MountCmd/VerifyCleanup 1.65
136 TestFunctional/parallel/Version/short 0.07
137 TestFunctional/parallel/Version/components 0.87
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.39
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
142 TestFunctional/parallel/ImageCommands/ImageBuild 5.16
143 TestFunctional/parallel/ImageCommands/Setup 2.19
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.02
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.11
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 12.14
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.53
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.65
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.93
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.32
154 TestFunctional/delete_addon-resizer_images 0.06
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestIngressAddonLegacy/StartLegacyK8sCluster 119.39
162 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.47
163 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.65
167 TestJSONOutput/start/Command 61.91
168 TestJSONOutput/start/Audit 0
170 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/pause/Command 0.69
174 TestJSONOutput/pause/Audit 0
176 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/unpause/Command 0.65
180 TestJSONOutput/unpause/Audit 0
182 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/stop/Command 7.1
186 TestJSONOutput/stop/Audit 0
188 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
190 TestErrorJSONOutput 0.22
195 TestMainNoArgs 0.06
196 TestMinikubeProfile 95.98
199 TestMountStart/serial/StartWithMountFirst 27.91
200 TestMountStart/serial/VerifyMountFirst 0.42
201 TestMountStart/serial/StartWithMountSecond 27.73
202 TestMountStart/serial/VerifyMountSecond 0.4
203 TestMountStart/serial/DeleteFirst 0.69
204 TestMountStart/serial/VerifyMountPostDelete 0.41
205 TestMountStart/serial/Stop 1.12
206 TestMountStart/serial/RestartStopped 25.32
207 TestMountStart/serial/VerifyMountPostStop 0.42
210 TestMultiNode/serial/FreshStart2Nodes 114.76
211 TestMultiNode/serial/DeployApp2Nodes 7.98
213 TestMultiNode/serial/AddNode 45.19
214 TestMultiNode/serial/MultiNodeLabels 0.07
215 TestMultiNode/serial/ProfileList 0.22
216 TestMultiNode/serial/CopyFile 7.7
217 TestMultiNode/serial/StopNode 2.99
218 TestMultiNode/serial/StartAfterStop 31.83
220 TestMultiNode/serial/DeleteNode 1.8
222 TestMultiNode/serial/RestartMultiNode 454.56
223 TestMultiNode/serial/ValidateNameConflict 48.99
230 TestScheduledStopUnix 118.29
236 TestKubernetesUpgrade 190.81
239 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
250 TestNoKubernetes/serial/StartWithK8s 77.66
255 TestNetworkPlugins/group/false 3.57
259 TestNoKubernetes/serial/StartWithStopK8s 16.32
260 TestNoKubernetes/serial/Start 32.15
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
262 TestNoKubernetes/serial/ProfileList 0.45
263 TestNoKubernetes/serial/Stop 1.19
264 TestNoKubernetes/serial/StartNoArgs 78.43
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
267 TestPause/serial/Start 128.34
268 TestStoppedBinaryUpgrade/Setup 2.18
272 TestStartStop/group/old-k8s-version/serial/FirstStart 122.56
274 TestStartStop/group/no-preload/serial/FirstStart 169.59
276 TestStartStop/group/embed-certs/serial/FirstStart 146.36
277 TestStoppedBinaryUpgrade/MinikubeLogs 0.43
278 TestStartStop/group/old-k8s-version/serial/DeployApp 10.99
280 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 62.74
281 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.18
283 TestStartStop/group/embed-certs/serial/DeployApp 12.6
284 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.25
286 TestStartStop/group/no-preload/serial/DeployApp 12.92
287 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.42
288 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
290 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
293 TestStartStop/group/old-k8s-version/serial/SecondStart 793.19
295 TestStartStop/group/embed-certs/serial/SecondStart 593.4
298 TestStartStop/group/no-preload/serial/SecondStart 611.08
299 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 574.42
309 TestStartStop/group/newest-cni/serial/FirstStart 60.06
310 TestNetworkPlugins/group/auto/Start 125.17
311 TestNetworkPlugins/group/kindnet/Start 110
312 TestStartStop/group/newest-cni/serial/DeployApp 0
313 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.92
314 TestStartStop/group/newest-cni/serial/Stop 2.41
315 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.28
316 TestStartStop/group/newest-cni/serial/SecondStart 66.21
317 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
318 TestNetworkPlugins/group/calico/Start 100.54
319 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
320 TestNetworkPlugins/group/kindnet/NetCatPod 13.43
321 TestNetworkPlugins/group/auto/KubeletFlags 0.28
322 TestNetworkPlugins/group/auto/NetCatPod 13.57
323 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
324 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
325 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
326 TestStartStop/group/newest-cni/serial/Pause 3.18
327 TestNetworkPlugins/group/custom-flannel/Start 107.36
328 TestNetworkPlugins/group/kindnet/DNS 0.21
329 TestNetworkPlugins/group/kindnet/Localhost 0.16
330 TestNetworkPlugins/group/kindnet/HairPin 0.18
331 TestNetworkPlugins/group/auto/DNS 0.18
332 TestNetworkPlugins/group/auto/Localhost 0.16
333 TestNetworkPlugins/group/auto/HairPin 0.14
334 TestNetworkPlugins/group/enable-default-cni/Start 127.62
335 TestNetworkPlugins/group/flannel/Start 131.21
336 TestNetworkPlugins/group/calico/ControllerPod 5.02
337 TestNetworkPlugins/group/calico/KubeletFlags 0.22
338 TestNetworkPlugins/group/calico/NetCatPod 12.39
339 TestNetworkPlugins/group/calico/DNS 0.24
340 TestNetworkPlugins/group/calico/Localhost 0.22
341 TestNetworkPlugins/group/calico/HairPin 0.19
342 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
343 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.33
344 TestNetworkPlugins/group/custom-flannel/DNS 0.25
345 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
346 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
347 TestNetworkPlugins/group/bridge/Start 106.82
348 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
349 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.36
350 TestNetworkPlugins/group/flannel/ControllerPod 5.03
351 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
352 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
353 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
354 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
355 TestNetworkPlugins/group/flannel/NetCatPod 12.37
356 TestNetworkPlugins/group/flannel/DNS 0.21
357 TestNetworkPlugins/group/flannel/Localhost 0.15
358 TestNetworkPlugins/group/flannel/HairPin 0.18
359 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
360 TestNetworkPlugins/group/bridge/NetCatPod 12.36
361 TestNetworkPlugins/group/bridge/DNS 0.17
362 TestNetworkPlugins/group/bridge/Localhost 0.14
363 TestNetworkPlugins/group/bridge/HairPin 0.13
x
+
TestDownloadOnly/v1.16.0/json-events (53.92s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-647419 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-647419 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (53.916503045s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (53.92s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-647419
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-647419: exit status 85 (76.316041ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-647419 | jenkins | v1.32.0 | 12 Dec 23 22:54 UTC |          |
	|         | -p download-only-647419        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:54:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:54:36.549968  143553 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:54:36.550145  143553 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:54:36.550154  143553 out.go:309] Setting ErrFile to fd 2...
	I1212 22:54:36.550161  143553 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:54:36.550340  143553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	W1212 22:54:36.550482  143553 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17777-136241/.minikube/config/config.json: open /home/jenkins/minikube-integration/17777-136241/.minikube/config/config.json: no such file or directory
	I1212 22:54:36.551092  143553 out.go:303] Setting JSON to true
	I1212 22:54:36.551968  143553 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5825,"bootTime":1702415852,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:54:36.552025  143553 start.go:138] virtualization: kvm guest
	I1212 22:54:36.554522  143553 out.go:97] [download-only-647419] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:54:36.556167  143553 out.go:169] MINIKUBE_LOCATION=17777
	W1212 22:54:36.554641  143553 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball: no such file or directory
	I1212 22:54:36.554675  143553 notify.go:220] Checking for updates...
	I1212 22:54:36.559009  143553 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:54:36.560370  143553 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 22:54:36.561657  143553 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 22:54:36.562833  143553 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1212 22:54:36.565147  143553 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 22:54:36.565358  143553 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:54:36.600592  143553 out.go:97] Using the kvm2 driver based on user configuration
	I1212 22:54:36.600630  143553 start.go:298] selected driver: kvm2
	I1212 22:54:36.600639  143553 start.go:902] validating driver "kvm2" against <nil>
	I1212 22:54:36.600930  143553 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:54:36.600993  143553 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17777-136241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 22:54:36.615229  143553 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 22:54:36.615286  143553 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 22:54:36.615750  143553 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1212 22:54:36.615897  143553 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 22:54:36.615985  143553 cni.go:84] Creating CNI manager for ""
	I1212 22:54:36.615998  143553 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 22:54:36.616010  143553 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 22:54:36.616017  143553 start_flags.go:323] config:
	{Name:download-only-647419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-647419 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:54:36.616217  143553 iso.go:125] acquiring lock: {Name:mkaf0c5717f5bf6253bd7ebf86eb20a82d195bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:54:36.618050  143553 out.go:97] Downloading VM boot image ...
	I1212 22:54:36.618085  143553 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso
	I1212 22:54:45.896919  143553 out.go:97] Starting control plane node download-only-647419 in cluster download-only-647419
	I1212 22:54:45.896956  143553 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1212 22:54:46.007236  143553 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1212 22:54:46.007272  143553 cache.go:56] Caching tarball of preloaded images
	I1212 22:54:46.007537  143553 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1212 22:54:46.009541  143553 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1212 22:54:46.009573  143553 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:54:46.124377  143553 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1212 22:55:01.697194  143553 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:55:01.697298  143553 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:55:02.573743  143553 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1212 22:55:02.574117  143553 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/download-only-647419/config.json ...
	I1212 22:55:02.574153  143553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/download-only-647419/config.json: {Name:mk1d4040fa460134f4de43f4b81cb36e845b8cc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:55:02.574312  143553 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1212 22:55:02.574498  143553 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-647419"
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

TestDownloadOnly/v1.28.4/json-events (16.86s)
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-647419 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-647419 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (16.859385s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (16.86s)

TestDownloadOnly/v1.28.4/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-647419
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-647419: exit status 85 (75.20836ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-647419 | jenkins | v1.32.0 | 12 Dec 23 22:54 UTC |          |
	|         | -p download-only-647419        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-647419 | jenkins | v1.32.0 | 12 Dec 23 22:55 UTC |          |
	|         | -p download-only-647419        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:55:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:55:30.542488  143702 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:55:30.542767  143702 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:55:30.542778  143702 out.go:309] Setting ErrFile to fd 2...
	I1212 22:55:30.542783  143702 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:55:30.543031  143702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	W1212 22:55:30.543176  143702 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17777-136241/.minikube/config/config.json: open /home/jenkins/minikube-integration/17777-136241/.minikube/config/config.json: no such file or directory
	I1212 22:55:30.543661  143702 out.go:303] Setting JSON to true
	I1212 22:55:30.544604  143702 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5879,"bootTime":1702415852,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:55:30.544665  143702 start.go:138] virtualization: kvm guest
	I1212 22:55:30.546566  143702 out.go:97] [download-only-647419] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:55:30.548217  143702 out.go:169] MINIKUBE_LOCATION=17777
	I1212 22:55:30.546806  143702 notify.go:220] Checking for updates...
	I1212 22:55:30.550999  143702 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:55:30.552570  143702 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 22:55:30.554085  143702 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 22:55:30.555422  143702 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1212 22:55:30.558034  143702 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 22:55:30.558495  143702 config.go:182] Loaded profile config "download-only-647419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1212 22:55:30.558559  143702 start.go:810] api.Load failed for download-only-647419: filestore "download-only-647419": Docker machine "download-only-647419" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 22:55:30.558648  143702 driver.go:392] Setting default libvirt URI to qemu:///system
	W1212 22:55:30.558690  143702 start.go:810] api.Load failed for download-only-647419: filestore "download-only-647419": Docker machine "download-only-647419" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 22:55:30.590002  143702 out.go:97] Using the kvm2 driver based on existing profile
	I1212 22:55:30.590034  143702 start.go:298] selected driver: kvm2
	I1212 22:55:30.590041  143702 start.go:902] validating driver "kvm2" against &{Name:download-only-647419 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.16.0 ClusterName:download-only-647419 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:55:30.590455  143702 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:55:30.590557  143702 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17777-136241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 22:55:30.604493  143702 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 22:55:30.605525  143702 cni.go:84] Creating CNI manager for ""
	I1212 22:55:30.605547  143702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 22:55:30.605561  143702 start_flags.go:323] config:
	{Name:download-only-647419 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-647419 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socke
tVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:55:30.605790  143702 iso.go:125] acquiring lock: {Name:mkaf0c5717f5bf6253bd7ebf86eb20a82d195bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:55:30.607466  143702 out.go:97] Starting control plane node download-only-647419 in cluster download-only-647419
	I1212 22:55:30.607479  143702 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:55:31.112301  143702 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 22:55:31.112333  143702 cache.go:56] Caching tarball of preloaded images
	I1212 22:55:31.112485  143702 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:55:31.114406  143702 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1212 22:55:31.114422  143702 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:55:31.225630  143702 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 22:55:45.236948  143702 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:55:45.237050  143702 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-647419"
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

TestDownloadOnly/v1.29.0-rc.2/json-events (44.5s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-647419 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-647419 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (44.496922649s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (44.50s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-647419
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-647419: exit status 85 (74.338805ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-647419 | jenkins | v1.32.0 | 12 Dec 23 22:54 UTC |          |
	|         | -p download-only-647419           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-647419 | jenkins | v1.32.0 | 12 Dec 23 22:55 UTC |          |
	|         | -p download-only-647419           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-647419 | jenkins | v1.32.0 | 12 Dec 23 22:55 UTC |          |
	|         | -p download-only-647419           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:55:47
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:55:47.480182  143781 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:55:47.480384  143781 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:55:47.480396  143781 out.go:309] Setting ErrFile to fd 2...
	I1212 22:55:47.480401  143781 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:55:47.480616  143781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	W1212 22:55:47.480773  143781 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17777-136241/.minikube/config/config.json: open /home/jenkins/minikube-integration/17777-136241/.minikube/config/config.json: no such file or directory
	I1212 22:55:47.481190  143781 out.go:303] Setting JSON to true
	I1212 22:55:47.482058  143781 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5896,"bootTime":1702415852,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:55:47.482116  143781 start.go:138] virtualization: kvm guest
	I1212 22:55:47.484134  143781 out.go:97] [download-only-647419] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:55:47.485722  143781 out.go:169] MINIKUBE_LOCATION=17777
	I1212 22:55:47.484340  143781 notify.go:220] Checking for updates...
	I1212 22:55:47.488578  143781 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:55:47.490295  143781 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 22:55:47.491835  143781 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 22:55:47.493246  143781 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1212 22:55:47.496005  143781 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 22:55:47.496512  143781 config.go:182] Loaded profile config "download-only-647419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1212 22:55:47.496577  143781 start.go:810] api.Load failed for download-only-647419: filestore "download-only-647419": Docker machine "download-only-647419" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 22:55:47.496672  143781 driver.go:392] Setting default libvirt URI to qemu:///system
	W1212 22:55:47.496716  143781 start.go:810] api.Load failed for download-only-647419: filestore "download-only-647419": Docker machine "download-only-647419" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 22:55:47.528585  143781 out.go:97] Using the kvm2 driver based on existing profile
	I1212 22:55:47.528610  143781 start.go:298] selected driver: kvm2
	I1212 22:55:47.528619  143781 start.go:902] validating driver "kvm2" against &{Name:download-only-647419 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:download-only-647419 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:55:47.529027  143781 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:55:47.529109  143781 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17777-136241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 22:55:47.543305  143781 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 22:55:47.544117  143781 cni.go:84] Creating CNI manager for ""
	I1212 22:55:47.544137  143781 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 22:55:47.544151  143781 start_flags.go:323] config:
	{Name:download-only-647419 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-647419 Namespace
:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:55:47.544297  143781 iso.go:125] acquiring lock: {Name:mkaf0c5717f5bf6253bd7ebf86eb20a82d195bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:55:47.546010  143781 out.go:97] Starting control plane node download-only-647419 in cluster download-only-647419
	I1212 22:55:47.546025  143781 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 22:55:48.050493  143781 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1212 22:55:48.050533  143781 cache.go:56] Caching tarball of preloaded images
	I1212 22:55:48.050700  143781 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 22:55:48.052815  143781 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1212 22:55:48.052843  143781 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:55:48.165863  143781 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:4677ed63f210d912abc47b8c2f7401f7 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1212 22:56:01.480749  143781 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:56:01.480850  143781 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17777-136241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:56:02.271384  143781 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I1212 22:56:02.271514  143781 profile.go:148] Saving config to /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/download-only-647419/config.json ...
	I1212 22:56:02.271724  143781 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 22:56:02.271917  143781 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17777-136241/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-647419"
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.14s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.14s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-647419
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.57s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-191948 --alsologtostderr --binary-mirror http://127.0.0.1:40327 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-191948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-191948
--- PASS: TestBinaryMirror (0.57s)

TestOffline (107.19s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-245437 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-245437 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m46.053536571s)
helpers_test.go:175: Cleaning up "offline-crio-245437" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-245437
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-245437: (1.138303008s)
--- PASS: TestOffline (107.19s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-577685
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-577685: exit status 85 (69.311595ms)
-- stdout --
	* Profile "addons-577685" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-577685"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-577685
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-577685: exit status 85 (67.86518ms)
-- stdout --
	* Profile "addons-577685" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-577685"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (218.75s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-577685 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-577685 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m38.752230517s)
--- PASS: TestAddons/Setup (218.75s)

TestAddons/parallel/Registry (22.9s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 36.01791ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-hqwg4" [d0bf4dcc-a461-4ab3-b7cd-a50f0b4d61c4] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.023413185s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-fptb7" [c0a8fb28-ceaa-4e60-8815-9440f1f663a1] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.014509655s
addons_test.go:339: (dbg) Run:  kubectl --context addons-577685 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-577685 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-577685 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (11.973219723s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-577685 ip
2023/12/12 23:00:34 [DEBUG] GET http://192.168.39.136:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-577685 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (22.90s)

TestAddons/parallel/InspektorGadget (11.4s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xwc7r" [6c7d806c-98c0-4645-9fac-593b1c135c03] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.015179465s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-577685
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-577685: (6.383881559s)
--- PASS: TestAddons/parallel/InspektorGadget (11.40s)

TestAddons/parallel/MetricsServer (6.09s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 36.113285ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-lclrb" [55901824-a685-464c-908b-469b9b6eb95f] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.025677107s
addons_test.go:414: (dbg) Run:  kubectl --context addons-577685 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-577685 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.09s)

TestAddons/parallel/HelmTiller (19.22s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 36.157162ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-kjkq6" [d4f500ad-4a08-4478-af71-f772ba964f09] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.020665075s
addons_test.go:472: (dbg) Run:  kubectl --context addons-577685 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-577685 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (13.177273728s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-577685 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (19.22s)

TestAddons/parallel/CSI (53.67s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 37.626716ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-577685 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-577685 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [efcf1d43-ea0c-45cc-97f1-a930f4c1f16a] Pending
helpers_test.go:344: "task-pv-pod" [efcf1d43-ea0c-45cc-97f1-a930f4c1f16a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [efcf1d43-ea0c-45cc-97f1-a930f4c1f16a] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.02986564s
addons_test.go:583: (dbg) Run:  kubectl --context addons-577685 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-577685 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-577685 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-577685 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-577685 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-577685 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-577685 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-577685 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [053b6d5b-92b3-4722-b488-599e03d4f1f5] Pending
helpers_test.go:344: "task-pv-pod-restore" [053b6d5b-92b3-4722-b488-599e03d4f1f5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [053b6d5b-92b3-4722-b488-599e03d4f1f5] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.030424525s
addons_test.go:625: (dbg) Run:  kubectl --context addons-577685 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-577685 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-577685 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-577685 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-577685 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.904308742s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-577685 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.67s)
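For reference, the snapshot-and-restore flow exercised above can be repeated by hand once the csi-hostpath-driver and volumesnapshots addons are enabled. The manifest below is only an illustrative sketch, not the contents of the testdata files, and the csi-hostpath-snapclass / csi-hostpath-sc class names are assumptions about what the addon installs:

kubectl apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: hpvc
---
# Restore target; it binds once the snapshot reports readyToUse=true.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                 # assumed class name
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF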

                                                
                                    
TestAddons/parallel/Headlamp (15.48s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-577685 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-577685 --alsologtostderr -v=1: (1.449740426s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-2dshq" [a98eeba9-4220-4a45-9383-ca3970d3c877] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-2dshq" [a98eeba9-4220-4a45-9383-ca3970d3c877] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-2dshq" [a98eeba9-4220-4a45-9383-ca3970d3c877] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.026322344s
--- PASS: TestAddons/parallel/Headlamp (15.48s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.73s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-rxwjn" [1b0d1570-5599-43da-9a9c-6178f1b87a4f] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.024212289s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-577685
--- PASS: TestAddons/parallel/CloudSpanner (5.73s)

                                                
                                    
TestAddons/parallel/LocalPath (62.84s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-577685 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-577685 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-577685 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c457740c-32e8-46c8-b0a1-1331d749f9a1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c457740c-32e8-46c8-b0a1-1331d749f9a1] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c457740c-32e8-46c8-b0a1-1331d749f9a1] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.011308442s
addons_test.go:890: (dbg) Run:  kubectl --context addons-577685 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-577685 ssh "cat /opt/local-path-provisioner/pvc-4645bbf6-7858-4980-ba0f-98b14aad17a1_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-577685 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-577685 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-577685 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-577685 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.563131794s)
--- PASS: TestAddons/parallel/LocalPath (62.84s)
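The repeated Pending-phase polls above are expected with the local-path provisioner, which normally defers binding until a pod actually consumes the claim (WaitForFirstConsumer). An illustrative claim against the class the addon installs (class name assumed; this is not the testdata manifest):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path        # assumed name of the addon's StorageClass
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 64Mi
EOF
# The claim stays Pending until a pod that mounts it is scheduled.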

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.64s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-knlgj" [44d91221-4176-4754-8d10-d474c4c15c2f] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.021807419s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-577685
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.64s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-577685 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-577685 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestCertOptions (60.47s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-643716 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-643716 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (59.080900369s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-643716 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-643716 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-643716 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-643716" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-643716
--- PASS: TestCertOptions (60.47s)
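The same assertions can be made interactively: start a profile with extra SANs and a non-default apiserver port, then check that they show up in the serving certificate and kubeconfig. A sketch with a hypothetical profile name:

minikube start -p cert-options-demo --driver=kvm2 --container-runtime=crio \
  --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555
# The extra IP/name should appear under "X509v3 Subject Alternative Name".
minikube -p cert-options-demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
# The cluster's server URL in the kubeconfig should end in :8555.
kubectl --context cert-options-demo config view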

                                                
                                    
TestCertExpiration (293.91s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-380248 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-380248 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m32.253898582s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-380248 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-380248 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (20.590150586s)
helpers_test.go:175: Cleaning up "cert-expiration-380248" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-380248
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-380248: (1.06153938s)
--- PASS: TestCertExpiration (293.91s)
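In shell form, the scenario is: provision with a deliberately short certificate lifetime, let it lapse, then start again with a long lifetime so the expired certificates are renewed on restart (profile name hypothetical):

minikube start -p cert-expiry-demo --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
sleep 180   # allow the 3-minute certificates to expire
minikube start -p cert-expiry-demo --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio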

                                                
                                    
TestForceSystemdFlag (88.77s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-527166 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-527166 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m27.685114057s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-527166 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-527166" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-527166
--- PASS: TestForceSystemdFlag (88.77s)
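--force-systemd switches the node to the systemd cgroup manager, and the test reads the CRI-O drop-in shown above to verify it. A hand-run equivalent (profile name hypothetical; the expected line is an assumption about what the assertion checks):

minikube start -p force-systemd-demo --memory=2048 --force-systemd --driver=kvm2 --container-runtime=crio
minikube -p force-systemd-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
# expected to contain (assumed): cgroup_manager = "systemd"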

                                                
                                    
TestForceSystemdEnv (97.15s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-222167 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1212 23:52:45.320705  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-222167 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m36.113633991s)
helpers_test.go:175: Cleaning up "force-systemd-env-222167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-222167
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-222167: (1.040268498s)
--- PASS: TestForceSystemdEnv (97.15s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.1s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.10s)

                                                
                                    
TestErrorSpam/setup (47.53s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-267474 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-267474 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-267474 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-267474 --driver=kvm2  --container-runtime=crio: (47.532381985s)
--- PASS: TestErrorSpam/setup (47.53s)

                                                
                                    
TestErrorSpam/start (0.38s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-267474 --log_dir /tmp/nospam-267474 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-267474 --log_dir /tmp/nospam-267474 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-267474 --log_dir /tmp/nospam-267474 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

                                                
                                    
TestErrorSpam/status (0.77s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-267474 --log_dir /tmp/nospam-267474 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-267474 --log_dir /tmp/nospam-267474 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-267474 --log_dir /tmp/nospam-267474 status
--- PASS: TestErrorSpam/status (0.77s)

                                                
                                    
TestErrorSpam/pause (1.62s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-267474 --log_dir /tmp/nospam-267474 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-267474 --log_dir /tmp/nospam-267474 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-267474 --log_dir /tmp/nospam-267474 pause
--- PASS: TestErrorSpam/pause (1.62s)

                                                
                                    
TestErrorSpam/unpause (1.79s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-267474 --log_dir /tmp/nospam-267474 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-267474 --log_dir /tmp/nospam-267474 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-267474 --log_dir /tmp/nospam-267474 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

                                                
                                    
TestErrorSpam/stop (2.26s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-267474 --log_dir /tmp/nospam-267474 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-267474 --log_dir /tmp/nospam-267474 stop: (2.098560727s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-267474 --log_dir /tmp/nospam-267474 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-267474 --log_dir /tmp/nospam-267474 stop
--- PASS: TestErrorSpam/stop (2.26s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17777-136241/.minikube/files/etc/test/nested/copy/143541/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (63.85s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-579382 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-579382 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m3.846059576s)
--- PASS: TestFunctional/serial/StartWithProxy (63.85s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (36.3s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-579382 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-579382 --alsologtostderr -v=8: (36.296865177s)
functional_test.go:659: soft start took 36.297551863s for "functional-579382" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.30s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-579382 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-579382 cache add registry.k8s.io/pause:3.3: (1.125097012s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-579382 cache add registry.k8s.io/pause:latest: (1.027014447s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.32s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-579382 /tmp/TestFunctionalserialCacheCmdcacheadd_local4250621037/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 cache add minikube-local-cache-test:functional-579382
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-579382 cache add minikube-local-cache-test:functional-579382: (1.984182952s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 cache delete minikube-local-cache-test:functional-579382
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-579382
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-579382 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (236.366034ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)
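Condensed, the reload sequence is: delete a cached image from the node's runtime, confirm crictl no longer finds it, then let `minikube cache reload` push every image in the local cache back into the node (profile name hypothetical):

minikube -p cache-demo cache add registry.k8s.io/pause:latest
minikube -p cache-demo ssh sudo crictl rmi registry.k8s.io/pause:latest
minikube -p cache-demo ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image is gone
minikube -p cache-demo cache reload
minikube -p cache-demo ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again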

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 kubectl -- --context functional-579382 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-579382 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.57s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-579382 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-579382 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.566466868s)
functional_test.go:757: restart took 34.566616488s for "functional-579382" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.57s)
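--extra-config takes component.key=value pairs that are passed through to the named Kubernetes component on (re)start; here it enables the NamespaceAutoProvision admission plugin on the apiserver. By hand (profile name hypothetical):

minikube start -p extra-config-demo \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all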

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-579382 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
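The health check amounts to listing the control-plane pods and confirming each is Running and Ready, which can be spot-checked directly (context name hypothetical):

kubectl --context functional-demo get po -l tier=control-plane -n kube-system \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'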

                                                
                                    
TestFunctional/serial/LogsCmd (1.54s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-579382 logs: (1.53980922s)
--- PASS: TestFunctional/serial/LogsCmd (1.54s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.57s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 logs --file /tmp/TestFunctionalserialLogsFileCmd1950854052/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-579382 logs --file /tmp/TestFunctionalserialLogsFileCmd1950854052/001/logs.txt: (1.567110002s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.57s)

                                                
                                    
TestFunctional/serial/InvalidService (4.53s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-579382 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-579382
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-579382: exit status 115 (306.636018ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.69:31881 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-579382 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.53s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.4s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-579382 config get cpus: exit status 14 (61.346588ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-579382 config get cpus: exit status 14 (60.238635ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (44.78s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-579382 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-579382 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 151242: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (44.78s)

                                                
                                    
TestFunctional/parallel/DryRun (0.33s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-579382 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-579382 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (162.596999ms)
-- stdout --
	* [functional-579382] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17777
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1212 23:09:41.426183  150676 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:09:41.426440  150676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:09:41.426448  150676 out.go:309] Setting ErrFile to fd 2...
	I1212 23:09:41.426453  150676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:09:41.426629  150676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1212 23:09:41.427147  150676 out.go:303] Setting JSON to false
	I1212 23:09:41.428101  150676 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6730,"bootTime":1702415852,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 23:09:41.428177  150676 start.go:138] virtualization: kvm guest
	I1212 23:09:41.430229  150676 out.go:177] * [functional-579382] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 23:09:41.431898  150676 notify.go:220] Checking for updates...
	I1212 23:09:41.433505  150676 out.go:177]   - MINIKUBE_LOCATION=17777
	I1212 23:09:41.434926  150676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:09:41.436382  150676 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:09:41.437817  150676 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 23:09:41.439466  150676 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 23:09:41.440906  150676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:09:41.442732  150676 config.go:182] Loaded profile config "functional-579382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:09:41.443149  150676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:09:41.443211  150676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:09:41.458283  150676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I1212 23:09:41.458711  150676 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:09:41.459248  150676 main.go:141] libmachine: Using API Version  1
	I1212 23:09:41.459279  150676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:09:41.459622  150676 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:09:41.459796  150676 main.go:141] libmachine: (functional-579382) Calling .DriverName
	I1212 23:09:41.460037  150676 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:09:41.460302  150676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:09:41.460337  150676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:09:41.475669  150676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37003
	I1212 23:09:41.476137  150676 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:09:41.476605  150676 main.go:141] libmachine: Using API Version  1
	I1212 23:09:41.476625  150676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:09:41.476964  150676 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:09:41.477154  150676 main.go:141] libmachine: (functional-579382) Calling .DriverName
	I1212 23:09:41.513825  150676 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 23:09:41.515358  150676 start.go:298] selected driver: kvm2
	I1212 23:09:41.515380  150676 start.go:902] validating driver "kvm2" against &{Name:functional-579382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-579382 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.69 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:09:41.515542  150676 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:09:41.518102  150676 out.go:177] 
	W1212 23:09:41.519496  150676 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 23:09:41.520837  150676 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-579382 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.33s)
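--dry-run validates the requested settings against the existing profile without touching the VM; asking for 250MB trips the 1800MB usable minimum, so the command exits with status 23 as captured above. By hand (profile name hypothetical):

minikube start -p functional-demo --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
echo $?   # 23 when the memory validation fails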

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-579382 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-579382 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (180.564836ms)
-- stdout --
	* [functional-579382] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17777
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1212 23:09:41.255246  150621 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:09:41.255434  150621 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:09:41.255463  150621 out.go:309] Setting ErrFile to fd 2...
	I1212 23:09:41.255480  150621 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:09:41.255800  150621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1212 23:09:41.256369  150621 out.go:303] Setting JSON to false
	I1212 23:09:41.257443  150621 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6729,"bootTime":1702415852,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 23:09:41.257523  150621 start.go:138] virtualization: kvm guest
	I1212 23:09:41.259727  150621 out.go:177] * [functional-579382] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1212 23:09:41.261812  150621 out.go:177]   - MINIKUBE_LOCATION=17777
	I1212 23:09:41.263234  150621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:09:41.261900  150621 notify.go:220] Checking for updates...
	I1212 23:09:41.264665  150621 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:09:41.266101  150621 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 23:09:41.267480  150621 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 23:09:41.269455  150621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:09:41.271445  150621 config.go:182] Loaded profile config "functional-579382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:09:41.272007  150621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:09:41.272088  150621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:09:41.294295  150621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39643
	I1212 23:09:41.294791  150621 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:09:41.295412  150621 main.go:141] libmachine: Using API Version  1
	I1212 23:09:41.295442  150621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:09:41.295764  150621 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:09:41.295953  150621 main.go:141] libmachine: (functional-579382) Calling .DriverName
	I1212 23:09:41.296159  150621 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:09:41.298767  150621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:09:41.298822  150621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:09:41.313678  150621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37729
	I1212 23:09:41.314132  150621 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:09:41.314585  150621 main.go:141] libmachine: Using API Version  1
	I1212 23:09:41.314606  150621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:09:41.314943  150621 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:09:41.315144  150621 main.go:141] libmachine: (functional-579382) Calling .DriverName
	I1212 23:09:41.349548  150621 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1212 23:09:41.350971  150621 start.go:298] selected driver: kvm2
	I1212 23:09:41.350984  150621 start.go:902] validating driver "kvm2" against &{Name:functional-579382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-579382 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.69 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:09:41.351098  150621 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:09:41.353497  150621 out.go:177] 
	W1212 23:09:41.354891  150621 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 23:09:41.356263  150621 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
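The French output above comes from locale detection on the client side; minikube reads the language from the environment (LC_ALL/LANG), so the same message can be reproduced with a French locale, assuming it is installed on the host (profile name hypothetical):

LC_ALL=fr_FR.UTF-8 minikube start -p functional-demo --dry-run --memory 250MB \
  --driver=kvm2 --container-runtime=crio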

                                                
                                    
TestFunctional/parallel/StatusCmd (1.24s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.24s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (12.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-579382 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-579382 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-zm9rq" [b7d3fe0b-9b8b-4f75-8eb5-942f699c75f7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-zm9rq" [b7d3fe0b-9b8b-4f75-8eb5-942f699c75f7] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.031309656s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.69:32376
functional_test.go:1674: http://192.168.50.69:32376: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-zm9rq

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.69:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.69:32376
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.67s)
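
The sequence above (create a deployment, expose it as a NodePort, resolve the URL through minikube, and probe it) can be replayed outside the harness. A minimal sketch assuming the functional-579382 profile is still running; the kubectl wait timeout is an arbitrary value added for this sketch:

# Deploy and expose the echoserver exactly as the test does.
kubectl --context functional-579382 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-579382 expose deployment hello-node-connect --type=NodePort --port=8080
# Wait for the pod to become Ready before probing.
kubectl --context functional-579382 wait --for=condition=ready pod -l app=hello-node-connect --timeout=120s
# Resolve the NodePort URL and hit the endpoint; the body should echo the request details.
URL=$(out/minikube-linux-amd64 -p functional-579382 service hello-node-connect --url)
curl -s "$URL"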

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (57.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [644ee3bf-6e7b-41ea-a9ce-e23c88b6899a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.023714595s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-579382 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-579382 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-579382 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-579382 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-579382 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [eeec2a03-2df2-448d-803d-039c0d0c335f] Pending
helpers_test.go:344: "sp-pod" [eeec2a03-2df2-448d-803d-039c0d0c335f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [eeec2a03-2df2-448d-803d-039c0d0c335f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.02791288s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-579382 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-579382 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-579382 delete -f testdata/storage-provisioner/pod.yaml: (4.065790627s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-579382 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f6cddffc-ebe5-48ea-ae03-9027d03272bb] Pending
helpers_test.go:344: "sp-pod" [f6cddffc-ebe5-48ea-ae03-9027d03272bb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f6cddffc-ebe5-48ea-ae03-9027d03272bb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.05518511s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-579382 exec sp-pod -- ls /tmp/mount
2023/12/12 23:10:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (57.02s)
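
The persistence claim is verified by writing a file through the first sp-pod, deleting that pod, recreating it against the same PVC, and checking that the file is still present. A condensed replay using the same testdata manifests (their contents are not shown in this log; the pod and mount path names below are taken from the output above, and the wait timeouts are arbitrary):

kubectl --context functional-579382 apply -f testdata/storage-provisioner/pvc.yaml
kubectl --context functional-579382 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-579382 wait --for=condition=ready pod sp-pod --timeout=180s
# Write a marker file onto the mounted claim.
kubectl --context functional-579382 exec sp-pod -- touch /tmp/mount/foo
# Recreate the pod and confirm the file survived.
kubectl --context functional-579382 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-579382 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-579382 wait --for=condition=ready pod sp-pod --timeout=180s
kubectl --context functional-579382 exec sp-pod -- ls /tmp/mount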

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh -n functional-579382 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 cp functional-579382:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2639270189/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh -n functional-579382 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh -n functional-579382 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.58s)
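
The cp checks copy a local file into the guest, read it back over SSH, and copy it back out again. A condensed round trip; /tmp/cp-test.txt below is a placeholder destination rather than the temp directory the harness generated:

# Host to guest, then read it back over SSH.
out/minikube-linux-amd64 -p functional-579382 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-amd64 -p functional-579382 ssh -n functional-579382 "sudo cat /home/docker/cp-test.txt"
# Guest back to host, then compare against the original.
out/minikube-linux-amd64 -p functional-579382 cp functional-579382:/home/docker/cp-test.txt /tmp/cp-test.txt
diff testdata/cp-test.txt /tmp/cp-test.txt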

                                                
                                    
x
+
TestFunctional/parallel/MySQL (29.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-579382 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-xlrrz" [20b33d16-7a1f-4d3d-a91e-a0493726cbee] Pending
helpers_test.go:344: "mysql-859648c796-xlrrz" [20b33d16-7a1f-4d3d-a91e-a0493726cbee] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-xlrrz" [20b33d16-7a1f-4d3d-a91e-a0493726cbee] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.035792499s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-579382 exec mysql-859648c796-xlrrz -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-579382 exec mysql-859648c796-xlrrz -- mysql -ppassword -e "show databases;": exit status 1 (366.735124ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-579382 exec mysql-859648c796-xlrrz -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-579382 exec mysql-859648c796-xlrrz -- mysql -ppassword -e "show databases;": exit status 1 (175.195626ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-579382 exec mysql-859648c796-xlrrz -- mysql -ppassword -e "show databases;"
E1212 23:10:12.123199  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-579382 exec mysql-859648c796-xlrrz -- mysql -ppassword -e "show databases;": exit status 1 (190.536408ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1212 23:10:12.443791  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
functional_test.go:1806: (dbg) Run:  kubectl --context functional-579382 exec mysql-859648c796-xlrrz -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.34s)
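
The "Access denied" and "Can't connect ... mysqld.sock" errors above are expected while MySQL is still initializing; the test simply keeps retrying the query until the server accepts it. A small retry loop in the same spirit (the attempt count and sleep interval are arbitrary choices for this sketch):

POD=$(kubectl --context functional-579382 get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}')
# Retry until mysqld is up and accepts the password from the test manifest.
for i in $(seq 1 30); do
  if kubectl --context functional-579382 exec "$POD" -- mysql -ppassword -e "show databases;"; then
    break
  fi
  sleep 5
done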

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/143541/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh "sudo cat /etc/test/nested/copy/143541/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/143541.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh "sudo cat /etc/ssl/certs/143541.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/143541.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh "sudo cat /usr/share/ca-certificates/143541.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/1435412.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh "sudo cat /etc/ssl/certs/1435412.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/1435412.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh "sudo cat /usr/share/ca-certificates/1435412.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.09s)
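
CertSync expects the synced certificate to be visible both under its original name (/etc/ssl/certs/143541.pem, /usr/share/ca-certificates/143541.pem) and under a hashed name (/etc/ssl/certs/51391683.0). Assuming the hashed name is the OpenSSL subject hash, it can be cross-checked on the host; the certificate path below is a placeholder:

# The .0 file name inside the VM should match the subject hash of the synced cert.
openssl x509 -hash -noout -in /path/to/143541.pem
# Compare against what is installed in the guest.
out/minikube-linux-amd64 -p functional-579382 ssh "sudo ls /etc/ssl/certs/ | grep 51391683"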

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-579382 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-579382 ssh "sudo systemctl is-active docker": exit status 1 (326.307494ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-579382 ssh "sudo systemctl is-active containerd": exit status 1 (289.757315ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
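
Both non-zero exits here are the expected result: with cri-o as the active runtime, systemctl is-active prints "inactive" for docker and containerd and returns exit status 3, which the ssh wrapper surfaces as "Process exited with status 3". A sketch that makes the expectation explicit:

# On a crio cluster, docker and containerd should both report inactive.
for unit in docker containerd; do
  # tr strips any trailing newline or carriage return from the ssh output.
  state=$(out/minikube-linux-amd64 -p functional-579382 ssh "sudo systemctl is-active $unit" | tr -d '[:space:]')
  [ "$state" = "inactive" ] || echo "unexpected state for $unit: $state"
done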

                                                
                                    
x
+
TestFunctional/parallel/License (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-579382 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-579382 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-kvrc8" [9e7b3739-19b8-414b-807e-7db3b4534467] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-kvrc8" [9e7b3739-19b8-414b-807e-7db3b4534467] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.014429469s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "304.905965ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "68.423371ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "286.424036ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "67.69763ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (9.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-579382 /tmp/TestFunctionalparallelMountCmdany-port3129036471/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1702422570197293522" to /tmp/TestFunctionalparallelMountCmdany-port3129036471/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1702422570197293522" to /tmp/TestFunctionalparallelMountCmdany-port3129036471/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1702422570197293522" to /tmp/TestFunctionalparallelMountCmdany-port3129036471/001/test-1702422570197293522
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-579382 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (278.277325ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 23:09 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 23:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 23:09 test-1702422570197293522
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh cat /mount-9p/test-1702422570197293522
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-579382 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [14af590c-2ef9-42ac-bcef-fddefe4f5242] Pending
helpers_test.go:344: "busybox-mount" [14af590c-2ef9-42ac-bcef-fddefe4f5242] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [14af590c-2ef9-42ac-bcef-fddefe4f5242] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [14af590c-2ef9-42ac-bcef-fddefe4f5242] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.018836321s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-579382 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-579382 /tmp/TestFunctionalparallelMountCmdany-port3129036471/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.87s)
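
The 9p mount flow above can be repeated by hand: start the mount in the background, confirm it from inside the guest with findmnt, then tear it down. A minimal sketch; /tmp/mount-demo is a placeholder for the generated temp directory the harness used:

mkdir -p /tmp/mount-demo
out/minikube-linux-amd64 mount -p functional-579382 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!
# Give the 9p server a moment, then verify the mount from inside the guest.
sleep 5
out/minikube-linux-amd64 -p functional-579382 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-579382 ssh -- ls -la /mount-9p
# Tear down: stop the mount process and force-unmount in the guest.
kill "$MOUNT_PID"
out/minikube-linux-amd64 -p functional-579382 ssh "sudo umount -f /mount-9p" || true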

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 service list -o json
functional_test.go:1493: Took "280.899039ms" to run "out/minikube-linux-amd64 -p functional-579382 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.69:30738
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-579382 /tmp/TestFunctionalparallelMountCmdspecific-port467500156/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-579382 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (305.601604ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-579382 /tmp/TestFunctionalparallelMountCmdspecific-port467500156/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-579382 ssh "sudo umount -f /mount-9p": exit status 1 (277.637467ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-579382 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-579382 /tmp/TestFunctionalparallelMountCmdspecific-port467500156/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.99s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.69:30738
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-579382 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2357224829/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-579382 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2357224829/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-579382 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2357224829/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-579382 ssh "findmnt -T" /mount1: exit status 1 (321.948027ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-579382 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-579382 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2357224829/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-579382 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2357224829/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-579382 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2357224829/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.65s)
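
VerifyCleanup starts three mounts from the same host directory and then relies on a single "mount --kill=true" call to terminate every background mount process for the profile; the "unable to find parent, assuming dead" lines are the helpers confirming those processes are gone. In script form, under the same placeholder directory as the previous sketch:

for target in /mount1 /mount2 /mount3; do
  out/minikube-linux-amd64 mount -p functional-579382 /tmp/mount-demo:$target --alsologtostderr -v=1 &
done
sleep 5
# One call cleans up all mount processes belonging to the profile.
out/minikube-linux-amd64 mount -p functional-579382 --kill=true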

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 version --short
E1212 23:10:13.084099  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.87s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 image ls --format short --alsologtostderr
E1212 23:10:14.364987  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-579382 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-579382
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-579382
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-579382 image ls --format short --alsologtostderr:
I1212 23:10:14.400477  151927 out.go:296] Setting OutFile to fd 1 ...
I1212 23:10:14.400818  151927 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 23:10:14.400833  151927 out.go:309] Setting ErrFile to fd 2...
I1212 23:10:14.400840  151927 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 23:10:14.401156  151927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
I1212 23:10:14.402004  151927 config.go:182] Loaded profile config "functional-579382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 23:10:14.402175  151927 config.go:182] Loaded profile config "functional-579382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 23:10:14.402747  151927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 23:10:14.402815  151927 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 23:10:14.417047  151927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40517
I1212 23:10:14.417549  151927 main.go:141] libmachine: () Calling .GetVersion
I1212 23:10:14.418216  151927 main.go:141] libmachine: Using API Version  1
I1212 23:10:14.418248  151927 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 23:10:14.418600  151927 main.go:141] libmachine: () Calling .GetMachineName
I1212 23:10:14.418813  151927 main.go:141] libmachine: (functional-579382) Calling .GetState
I1212 23:10:14.420741  151927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 23:10:14.420794  151927 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 23:10:14.434568  151927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45771
I1212 23:10:14.435049  151927 main.go:141] libmachine: () Calling .GetVersion
I1212 23:10:14.435672  151927 main.go:141] libmachine: Using API Version  1
I1212 23:10:14.435725  151927 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 23:10:14.436123  151927 main.go:141] libmachine: () Calling .GetMachineName
I1212 23:10:14.436302  151927 main.go:141] libmachine: (functional-579382) Calling .DriverName
I1212 23:10:14.436559  151927 ssh_runner.go:195] Run: systemctl --version
I1212 23:10:14.436593  151927 main.go:141] libmachine: (functional-579382) Calling .GetSSHHostname
I1212 23:10:14.439382  151927 main.go:141] libmachine: (functional-579382) DBG | domain functional-579382 has defined MAC address 52:54:00:31:b4:9c in network mk-functional-579382
I1212 23:10:14.439771  151927 main.go:141] libmachine: (functional-579382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:b4:9c", ip: ""} in network mk-functional-579382: {Iface:virbr1 ExpiryTime:2023-12-13 00:07:13 +0000 UTC Type:0 Mac:52:54:00:31:b4:9c Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:functional-579382 Clientid:01:52:54:00:31:b4:9c}
I1212 23:10:14.439808  151927 main.go:141] libmachine: (functional-579382) DBG | domain functional-579382 has defined IP address 192.168.50.69 and MAC address 52:54:00:31:b4:9c in network mk-functional-579382
I1212 23:10:14.439922  151927 main.go:141] libmachine: (functional-579382) Calling .GetSSHPort
I1212 23:10:14.440095  151927 main.go:141] libmachine: (functional-579382) Calling .GetSSHKeyPath
I1212 23:10:14.440250  151927 main.go:141] libmachine: (functional-579382) Calling .GetSSHUsername
I1212 23:10:14.440404  151927 sshutil.go:53] new ssh client: &{IP:192.168.50.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/functional-579382/id_rsa Username:docker}
I1212 23:10:14.572934  151927 ssh_runner.go:195] Run: sudo crictl images --output json
I1212 23:10:14.634654  151927 main.go:141] libmachine: Making call to close driver server
I1212 23:10:14.634667  151927 main.go:141] libmachine: (functional-579382) Calling .Close
I1212 23:10:14.634981  151927 main.go:141] libmachine: Successfully made call to close driver server
I1212 23:10:14.635019  151927 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 23:10:14.635030  151927 main.go:141] libmachine: Making call to close driver server
I1212 23:10:14.635044  151927 main.go:141] libmachine: (functional-579382) Calling .Close
I1212 23:10:14.635293  151927 main.go:141] libmachine: (functional-579382) DBG | Closing plugin on server side
I1212 23:10:14.635334  151927 main.go:141] libmachine: Successfully made call to close driver server
I1212 23:10:14.635348  151927 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
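
As the stderr trace shows, "image ls" is backed by "sudo crictl images --output json" inside the guest, which minikube then renders as the short, table, or json views exercised by these tests. The raw listing can be inspected directly:

# Same data the image ls subcommand consumes, straight from CRI-O.
out/minikube-linux-amd64 -p functional-579382 ssh -- sudo crictl images --output json
# Formatted views used by the surrounding tests.
out/minikube-linux-amd64 -p functional-579382 image ls --format short
out/minikube-linux-amd64 -p functional-579382 image ls --format table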

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-579382 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/google-containers/addon-resizer  | functional-579382  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | bdba757bc9336 | 520MB  |
| docker.io/library/nginx                 | latest             | a6bd71f48f683 | 191MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-579382  | 6e8b4c4d94f1f | 3.35kB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-579382 image ls --format table --alsologtostderr:
I1212 23:10:15.606211  152037 out.go:296] Setting OutFile to fd 1 ...
I1212 23:10:15.606360  152037 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 23:10:15.606370  152037 out.go:309] Setting ErrFile to fd 2...
I1212 23:10:15.606375  152037 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 23:10:15.606562  152037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
I1212 23:10:15.607110  152037 config.go:182] Loaded profile config "functional-579382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 23:10:15.607216  152037 config.go:182] Loaded profile config "functional-579382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 23:10:15.607565  152037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 23:10:15.607618  152037 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 23:10:15.622035  152037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
I1212 23:10:15.622536  152037 main.go:141] libmachine: () Calling .GetVersion
I1212 23:10:15.623136  152037 main.go:141] libmachine: Using API Version  1
I1212 23:10:15.623163  152037 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 23:10:15.623510  152037 main.go:141] libmachine: () Calling .GetMachineName
I1212 23:10:15.623718  152037 main.go:141] libmachine: (functional-579382) Calling .GetState
I1212 23:10:15.625762  152037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 23:10:15.625811  152037 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 23:10:15.639844  152037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42541
I1212 23:10:15.640251  152037 main.go:141] libmachine: () Calling .GetVersion
I1212 23:10:15.640892  152037 main.go:141] libmachine: Using API Version  1
I1212 23:10:15.640925  152037 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 23:10:15.641279  152037 main.go:141] libmachine: () Calling .GetMachineName
I1212 23:10:15.641459  152037 main.go:141] libmachine: (functional-579382) Calling .DriverName
I1212 23:10:15.641653  152037 ssh_runner.go:195] Run: systemctl --version
I1212 23:10:15.641675  152037 main.go:141] libmachine: (functional-579382) Calling .GetSSHHostname
I1212 23:10:15.644483  152037 main.go:141] libmachine: (functional-579382) DBG | domain functional-579382 has defined MAC address 52:54:00:31:b4:9c in network mk-functional-579382
I1212 23:10:15.644915  152037 main.go:141] libmachine: (functional-579382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:b4:9c", ip: ""} in network mk-functional-579382: {Iface:virbr1 ExpiryTime:2023-12-13 00:07:13 +0000 UTC Type:0 Mac:52:54:00:31:b4:9c Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:functional-579382 Clientid:01:52:54:00:31:b4:9c}
I1212 23:10:15.644943  152037 main.go:141] libmachine: (functional-579382) DBG | domain functional-579382 has defined IP address 192.168.50.69 and MAC address 52:54:00:31:b4:9c in network mk-functional-579382
I1212 23:10:15.645141  152037 main.go:141] libmachine: (functional-579382) Calling .GetSSHPort
I1212 23:10:15.645313  152037 main.go:141] libmachine: (functional-579382) Calling .GetSSHKeyPath
I1212 23:10:15.645540  152037 main.go:141] libmachine: (functional-579382) Calling .GetSSHUsername
I1212 23:10:15.645708  152037 sshutil.go:53] new ssh client: &{IP:192.168.50.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/functional-579382/id_rsa Username:docker}
I1212 23:10:15.766971  152037 ssh_runner.go:195] Run: sudo crictl images --output json
I1212 23:10:15.839766  152037 main.go:141] libmachine: Making call to close driver server
I1212 23:10:15.839788  152037 main.go:141] libmachine: (functional-579382) Calling .Close
I1212 23:10:15.840069  152037 main.go:141] libmachine: (functional-579382) DBG | Closing plugin on server side
I1212 23:10:15.840108  152037 main.go:141] libmachine: Successfully made call to close driver server
I1212 23:10:15.840121  152037 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 23:10:15.840141  152037 main.go:141] libmachine: Making call to close driver server
I1212 23:10:15.840154  152037 main.go:141] libmachine: (functional-579382) Calling .Close
I1212 23:10:15.840353  152037 main.go:141] libmachine: Successfully made call to close driver server
I1212 23:10:15.840367  152037 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-579382 image ls --format json --alsologtostderr:
[{"id":"6e8b4c4d94f1f3c14a3cc12e9f3ec3779288e7a433d3b4e32c8f7c5b45233149","repoDigests":["localhost/minikube-local-cache-test@sha256:b728fd9995c9603a3bfee24e9a9b2f6dc6d3f9a3adcad2730cb2af9a704e63fd"],"repoTags":["localhost/minikube-local-cache-test:functional-579382"],"size":"3345"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io
/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c","repoDigests":["docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0
924a7ef357513ecc7a3","docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519653829"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa
4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-579382"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.i
o/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45
bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee","docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7"],"repoTags":["docker.io/library/nginx:latest"],"size":"190960382"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","rep
oDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-579382 image ls --format json --alsologtostderr:
I1212 23:10:15.908519  152059 out.go:296] Setting OutFile to fd 1 ...
I1212 23:10:15.908730  152059 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 23:10:15.908745  152059 out.go:309] Setting ErrFile to fd 2...
I1212 23:10:15.908754  152059 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 23:10:15.909076  152059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
I1212 23:10:15.909911  152059 config.go:182] Loaded profile config "functional-579382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 23:10:15.910093  152059 config.go:182] Loaded profile config "functional-579382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 23:10:15.910694  152059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 23:10:15.910756  152059 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 23:10:15.924936  152059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39479
I1212 23:10:15.925438  152059 main.go:141] libmachine: () Calling .GetVersion
I1212 23:10:15.925960  152059 main.go:141] libmachine: Using API Version  1
I1212 23:10:15.925984  152059 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 23:10:15.926386  152059 main.go:141] libmachine: () Calling .GetMachineName
I1212 23:10:15.926647  152059 main.go:141] libmachine: (functional-579382) Calling .GetState
I1212 23:10:15.928511  152059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 23:10:15.928553  152059 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 23:10:15.942772  152059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37975
I1212 23:10:15.943383  152059 main.go:141] libmachine: () Calling .GetVersion
I1212 23:10:15.943986  152059 main.go:141] libmachine: Using API Version  1
I1212 23:10:15.944028  152059 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 23:10:15.944394  152059 main.go:141] libmachine: () Calling .GetMachineName
I1212 23:10:15.944588  152059 main.go:141] libmachine: (functional-579382) Calling .DriverName
I1212 23:10:15.944805  152059 ssh_runner.go:195] Run: systemctl --version
I1212 23:10:15.944826  152059 main.go:141] libmachine: (functional-579382) Calling .GetSSHHostname
I1212 23:10:15.947806  152059 main.go:141] libmachine: (functional-579382) DBG | domain functional-579382 has defined MAC address 52:54:00:31:b4:9c in network mk-functional-579382
I1212 23:10:15.948251  152059 main.go:141] libmachine: (functional-579382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:b4:9c", ip: ""} in network mk-functional-579382: {Iface:virbr1 ExpiryTime:2023-12-13 00:07:13 +0000 UTC Type:0 Mac:52:54:00:31:b4:9c Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:functional-579382 Clientid:01:52:54:00:31:b4:9c}
I1212 23:10:15.948284  152059 main.go:141] libmachine: (functional-579382) DBG | domain functional-579382 has defined IP address 192.168.50.69 and MAC address 52:54:00:31:b4:9c in network mk-functional-579382
I1212 23:10:15.948460  152059 main.go:141] libmachine: (functional-579382) Calling .GetSSHPort
I1212 23:10:15.948615  152059 main.go:141] libmachine: (functional-579382) Calling .GetSSHKeyPath
I1212 23:10:15.948779  152059 main.go:141] libmachine: (functional-579382) Calling .GetSSHUsername
I1212 23:10:15.948917  152059 sshutil.go:53] new ssh client: &{IP:192.168.50.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/functional-579382/id_rsa Username:docker}
I1212 23:10:16.127359  152059 ssh_runner.go:195] Run: sudo crictl images --output json
I1212 23:10:16.230922  152059 main.go:141] libmachine: Making call to close driver server
I1212 23:10:16.230935  152059 main.go:141] libmachine: (functional-579382) Calling .Close
I1212 23:10:16.231270  152059 main.go:141] libmachine: Successfully made call to close driver server
I1212 23:10:16.231312  152059 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 23:10:16.231323  152059 main.go:141] libmachine: Making call to close driver server
I1212 23:10:16.231329  152059 main.go:141] libmachine: (functional-579382) Calling .Close
I1212 23:10:16.231335  152059 main.go:141] libmachine: (functional-579382) DBG | Closing plugin on server side
I1212 23:10:16.231580  152059 main.go:141] libmachine: (functional-579382) DBG | Closing plugin on server side
I1212 23:10:16.231598  152059 main.go:141] libmachine: Successfully made call to close driver server
I1212 23:10:16.231615  152059 main.go:141] libmachine: Making call to close connection to plugin binary
E1212 23:10:16.925616  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-579382 image ls --format yaml --alsologtostderr:
- id: bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c
repoDigests:
- docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357513ecc7a3
- docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1
repoTags:
- docker.io/library/mysql:5.7
size: "519653829"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
- docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7
repoTags:
- docker.io/library/nginx:latest
size: "190960382"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-579382
size: "34114467"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 6e8b4c4d94f1f3c14a3cc12e9f3ec3779288e7a433d3b4e32c8f7c5b45233149
repoDigests:
- localhost/minikube-local-cache-test@sha256:b728fd9995c9603a3bfee24e9a9b2f6dc6d3f9a3adcad2730cb2af9a704e63fd
repoTags:
- localhost/minikube-local-cache-test:functional-579382
size: "3345"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-579382 image ls --format yaml --alsologtostderr:
I1212 23:10:14.696861  151951 out.go:296] Setting OutFile to fd 1 ...
I1212 23:10:14.697120  151951 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 23:10:14.697128  151951 out.go:309] Setting ErrFile to fd 2...
I1212 23:10:14.697132  151951 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 23:10:14.697327  151951 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
I1212 23:10:14.697915  151951 config.go:182] Loaded profile config "functional-579382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 23:10:14.698019  151951 config.go:182] Loaded profile config "functional-579382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 23:10:14.698368  151951 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 23:10:14.698410  151951 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 23:10:14.712341  151951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42301
I1212 23:10:14.712818  151951 main.go:141] libmachine: () Calling .GetVersion
I1212 23:10:14.713390  151951 main.go:141] libmachine: Using API Version  1
I1212 23:10:14.713417  151951 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 23:10:14.713805  151951 main.go:141] libmachine: () Calling .GetMachineName
I1212 23:10:14.714029  151951 main.go:141] libmachine: (functional-579382) Calling .GetState
I1212 23:10:14.715686  151951 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 23:10:14.715737  151951 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 23:10:14.729636  151951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46587
I1212 23:10:14.730007  151951 main.go:141] libmachine: () Calling .GetVersion
I1212 23:10:14.730519  151951 main.go:141] libmachine: Using API Version  1
I1212 23:10:14.730552  151951 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 23:10:14.730869  151951 main.go:141] libmachine: () Calling .GetMachineName
I1212 23:10:14.731064  151951 main.go:141] libmachine: (functional-579382) Calling .DriverName
I1212 23:10:14.731269  151951 ssh_runner.go:195] Run: systemctl --version
I1212 23:10:14.731296  151951 main.go:141] libmachine: (functional-579382) Calling .GetSSHHostname
I1212 23:10:14.733898  151951 main.go:141] libmachine: (functional-579382) DBG | domain functional-579382 has defined MAC address 52:54:00:31:b4:9c in network mk-functional-579382
I1212 23:10:14.734288  151951 main.go:141] libmachine: (functional-579382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:b4:9c", ip: ""} in network mk-functional-579382: {Iface:virbr1 ExpiryTime:2023-12-13 00:07:13 +0000 UTC Type:0 Mac:52:54:00:31:b4:9c Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:functional-579382 Clientid:01:52:54:00:31:b4:9c}
I1212 23:10:14.734317  151951 main.go:141] libmachine: (functional-579382) DBG | domain functional-579382 has defined IP address 192.168.50.69 and MAC address 52:54:00:31:b4:9c in network mk-functional-579382
I1212 23:10:14.734495  151951 main.go:141] libmachine: (functional-579382) Calling .GetSSHPort
I1212 23:10:14.734686  151951 main.go:141] libmachine: (functional-579382) Calling .GetSSHKeyPath
I1212 23:10:14.734861  151951 main.go:141] libmachine: (functional-579382) Calling .GetSSHUsername
I1212 23:10:14.735010  151951 sshutil.go:53] new ssh client: &{IP:192.168.50.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/functional-579382/id_rsa Username:docker}
I1212 23:10:14.860334  151951 ssh_runner.go:195] Run: sudo crictl images --output json
I1212 23:10:14.964510  151951 main.go:141] libmachine: Making call to close driver server
I1212 23:10:14.964544  151951 main.go:141] libmachine: (functional-579382) Calling .Close
I1212 23:10:14.964838  151951 main.go:141] libmachine: (functional-579382) DBG | Closing plugin on server side
I1212 23:10:14.964919  151951 main.go:141] libmachine: Successfully made call to close driver server
I1212 23:10:14.964935  151951 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 23:10:14.964954  151951 main.go:141] libmachine: Making call to close driver server
I1212 23:10:14.964968  151951 main.go:141] libmachine: (functional-579382) Calling .Close
I1212 23:10:14.965231  151951 main.go:141] libmachine: Successfully made call to close driver server
I1212 23:10:14.965250  151951 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-579382 ssh pgrep buildkitd: exit status 1 (265.427074ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 image build -t localhost/my-image:functional-579382 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-579382 image build -t localhost/my-image:functional-579382 testdata/build --alsologtostderr: (4.622546837s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-579382 image build -t localhost/my-image:functional-579382 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b4496a6a91a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-579382
--> 2e552f1af45
Successfully tagged localhost/my-image:functional-579382
2e552f1af45777a65d908dd5f06bd32acc22371866d6af96463a714b60ea07c5
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-579382 image build -t localhost/my-image:functional-579382 testdata/build --alsologtostderr:
I1212 23:10:15.300691  152003 out.go:296] Setting OutFile to fd 1 ...
I1212 23:10:15.300854  152003 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 23:10:15.300864  152003 out.go:309] Setting ErrFile to fd 2...
I1212 23:10:15.300868  152003 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 23:10:15.301039  152003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
I1212 23:10:15.301624  152003 config.go:182] Loaded profile config "functional-579382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 23:10:15.302162  152003 config.go:182] Loaded profile config "functional-579382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 23:10:15.302541  152003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 23:10:15.302584  152003 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 23:10:15.321604  152003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37171
I1212 23:10:15.322139  152003 main.go:141] libmachine: () Calling .GetVersion
I1212 23:10:15.322863  152003 main.go:141] libmachine: Using API Version  1
I1212 23:10:15.322892  152003 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 23:10:15.323742  152003 main.go:141] libmachine: () Calling .GetMachineName
I1212 23:10:15.324025  152003 main.go:141] libmachine: (functional-579382) Calling .GetState
I1212 23:10:15.326219  152003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 23:10:15.326292  152003 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 23:10:15.341481  152003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46697
I1212 23:10:15.341833  152003 main.go:141] libmachine: () Calling .GetVersion
I1212 23:10:15.342286  152003 main.go:141] libmachine: Using API Version  1
I1212 23:10:15.342311  152003 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 23:10:15.342609  152003 main.go:141] libmachine: () Calling .GetMachineName
I1212 23:10:15.342769  152003 main.go:141] libmachine: (functional-579382) Calling .DriverName
I1212 23:10:15.342924  152003 ssh_runner.go:195] Run: systemctl --version
I1212 23:10:15.342948  152003 main.go:141] libmachine: (functional-579382) Calling .GetSSHHostname
I1212 23:10:15.345901  152003 main.go:141] libmachine: (functional-579382) DBG | domain functional-579382 has defined MAC address 52:54:00:31:b4:9c in network mk-functional-579382
I1212 23:10:15.346282  152003 main.go:141] libmachine: (functional-579382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:b4:9c", ip: ""} in network mk-functional-579382: {Iface:virbr1 ExpiryTime:2023-12-13 00:07:13 +0000 UTC Type:0 Mac:52:54:00:31:b4:9c Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:functional-579382 Clientid:01:52:54:00:31:b4:9c}
I1212 23:10:15.346301  152003 main.go:141] libmachine: (functional-579382) DBG | domain functional-579382 has defined IP address 192.168.50.69 and MAC address 52:54:00:31:b4:9c in network mk-functional-579382
I1212 23:10:15.346434  152003 main.go:141] libmachine: (functional-579382) Calling .GetSSHPort
I1212 23:10:15.346602  152003 main.go:141] libmachine: (functional-579382) Calling .GetSSHKeyPath
I1212 23:10:15.346689  152003 main.go:141] libmachine: (functional-579382) Calling .GetSSHUsername
I1212 23:10:15.346782  152003 sshutil.go:53] new ssh client: &{IP:192.168.50.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/functional-579382/id_rsa Username:docker}
I1212 23:10:15.486901  152003 build_images.go:151] Building image from path: /tmp/build.3698476414.tar
I1212 23:10:15.486960  152003 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 23:10:15.525664  152003 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3698476414.tar
I1212 23:10:15.557553  152003 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3698476414.tar: stat -c "%s %y" /var/lib/minikube/build/build.3698476414.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3698476414.tar': No such file or directory
I1212 23:10:15.557589  152003 ssh_runner.go:362] scp /tmp/build.3698476414.tar --> /var/lib/minikube/build/build.3698476414.tar (3072 bytes)
I1212 23:10:15.631067  152003 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3698476414
I1212 23:10:15.656156  152003 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3698476414 -xf /var/lib/minikube/build/build.3698476414.tar
I1212 23:10:15.671528  152003 crio.go:297] Building image: /var/lib/minikube/build/build.3698476414
I1212 23:10:15.671638  152003 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-579382 /var/lib/minikube/build/build.3698476414 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1212 23:10:19.831972  152003 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-579382 /var/lib/minikube/build/build.3698476414 --cgroup-manager=cgroupfs: (4.160301929s)
I1212 23:10:19.832049  152003 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3698476414
I1212 23:10:19.843499  152003 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3698476414.tar
I1212 23:10:19.852991  152003 build_images.go:207] Built localhost/my-image:functional-579382 from /tmp/build.3698476414.tar
I1212 23:10:19.853034  152003 build_images.go:123] succeeded building to: functional-579382
I1212 23:10:19.853039  152003 build_images.go:124] failed building to: 
I1212 23:10:19.853063  152003 main.go:141] libmachine: Making call to close driver server
I1212 23:10:19.853077  152003 main.go:141] libmachine: (functional-579382) Calling .Close
I1212 23:10:19.853385  152003 main.go:141] libmachine: (functional-579382) DBG | Closing plugin on server side
I1212 23:10:19.853393  152003 main.go:141] libmachine: Successfully made call to close driver server
I1212 23:10:19.853407  152003 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 23:10:19.853417  152003 main.go:141] libmachine: Making call to close driver server
I1212 23:10:19.853427  152003 main.go:141] libmachine: (functional-579382) Calling .Close
I1212 23:10:19.853665  152003 main.go:141] libmachine: (functional-579382) DBG | Closing plugin on server side
I1212 23:10:19.853692  152003 main.go:141] libmachine: Successfully made call to close driver server
I1212 23:10:19.853702  152003 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 image ls
E1212 23:10:22.045949  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.16s)
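Note: the STEP 1/3 .. 3/3 lines above imply that testdata/build contains a three-line Containerfile plus a content.txt file. A minimal sketch for reproducing the same build by hand, assuming a scratch directory /tmp/build and placeholder file contents (the repository's actual testdata/build may differ):

    # recreate an equivalent build context (assumed layout, not the repo's testdata)
    mkdir -p /tmp/build && cd /tmp/build
    printf 'hello\n' > content.txt
    cat > Containerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    # build it inside the cluster's CRI-O/podman, as the test does
    out/minikube-linux-amd64 -p functional-579382 image build -t localhost/my-image:functional-579382 . --alsologtostderr

As in the log, the build runs on the node via `sudo podman build --cgroup-manager=cgroupfs` after the context is copied over as a tarball.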

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.173672157s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-579382
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 image load --daemon gcr.io/google-containers/addon-resizer:functional-579382 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-579382 image load --daemon gcr.io/google-containers/addon-resizer:functional-579382 --alsologtostderr: (4.787442968s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.02s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 image load --daemon gcr.io/google-containers/addon-resizer:functional-579382 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-579382 image load --daemon gcr.io/google-containers/addon-resizer:functional-579382 --alsologtostderr: (2.589201518s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (12.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.40216862s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-579382
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 image load --daemon gcr.io/google-containers/addon-resizer:functional-579382 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-579382 image load --daemon gcr.io/google-containers/addon-resizer:functional-579382 --alsologtostderr: (9.467288551s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (12.14s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 image save gcr.io/google-containers/addon-resizer:functional-579382 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-579382 image save gcr.io/google-containers/addon-resizer:functional-579382 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.525095937s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 image rm gcr.io/google-containers/addon-resizer:functional-579382 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-579382 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.653139056s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.93s)
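Taken together, ImageSaveToFile, ImageRemove and ImageLoadFromFile exercise the tarball round trip. A condensed sketch of that flow using the commands as they appear in the log (only the tar path is shortened here for readability):

    # export the image from the cluster runtime to a tarball on the host
    out/minikube-linux-amd64 -p functional-579382 image save gcr.io/google-containers/addon-resizer:functional-579382 ./addon-resizer-save.tar --alsologtostderr
    # drop it from the runtime, then re-import it from the tarball and verify
    out/minikube-linux-amd64 -p functional-579382 image rm gcr.io/google-containers/addon-resizer:functional-579382 --alsologtostderr
    out/minikube-linux-amd64 -p functional-579382 image load ./addon-resizer-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-579382 image ls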

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-579382
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-579382 image save --daemon gcr.io/google-containers/addon-resizer:functional-579382 --alsologtostderr
E1212 23:10:11.805095  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
E1212 23:10:11.811005  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
E1212 23:10:11.821359  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
E1212 23:10:11.841736  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
E1212 23:10:11.882135  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
E1212 23:10:11.962556  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-579382 image save --daemon gcr.io/google-containers/addon-resizer:functional-579382 --alsologtostderr: (2.285183882s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-579382
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.32s)
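The daemon variant goes the other direction: docker rmi removes the tag from the host, image save --daemon pushes the cluster's copy back into the local Docker daemon, and docker image inspect confirms it arrived. A condensed sketch of the same check (the --format flag is added here for illustration and is not part of the test):

    docker rmi gcr.io/google-containers/addon-resizer:functional-579382
    out/minikube-linux-amd64 -p functional-579382 image save --daemon gcr.io/google-containers/addon-resizer:functional-579382 --alsologtostderr
    docker image inspect gcr.io/google-containers/addon-resizer:functional-579382 --format '{{.Id}}'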

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-579382
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-579382
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-579382
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (119.39s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-401709 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1212 23:10:32.286310  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
E1212 23:10:52.766599  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
E1212 23:11:33.727375  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-401709 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m59.393942437s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (119.39s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.47s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-401709 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-401709 addons enable ingress --alsologtostderr -v=5: (17.465023638s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.47s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.65s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-401709 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.65s)

                                                
                                    
x
+
TestJSONOutput/start/Command (61.91s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-998704 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1212 23:15:39.488585  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
E1212 23:15:49.540706  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-998704 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.905235076s)
--- PASS: TestJSONOutput/start/Command (61.91s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-998704 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-998704 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.1s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-998704 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-998704 --output=json --user=testUser: (7.104696159s)
--- PASS: TestJSONOutput/stop/Command (7.10s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-027581 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-027581 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (80.441727ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f8ead907-68e2-41e7-a06d-69bfbb17fc72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-027581] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7068208b-e68f-45f0-a986-add5853588c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17777"}}
	{"specversion":"1.0","id":"7b7b6e5c-d613-466a-8301-8c828e790e55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3c1d7576-544e-4bba-9ce0-f4602698be83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig"}}
	{"specversion":"1.0","id":"1ad6e39f-a0c3-476b-abd3-653cd3560992","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube"}}
	{"specversion":"1.0","id":"6ab312af-0b60-43c0-8ac4-d49d4fa36bb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"268c035c-6ebf-4638-9c49-320dc4b390a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cf863793-2cb3-4f06-968a-722f2edfa395","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-027581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-027581
--- PASS: TestErrorJSONOutput (0.22s)
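The stdout above shows that --output=json emits one CloudEvents-style JSON object per line, with types such as io.k8s.sigs.minikube.step, io.k8s.sigs.minikube.info and io.k8s.sigs.minikube.error. A hedged example of isolating the error event from such output, assuming jq is available on the host (not something the test itself uses):

    out/minikube-linux-amd64 start -p json-output-error-027581 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'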

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (95.98s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-589393 --driver=kvm2  --container-runtime=crio
E1212 23:17:11.461667  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-589393 --driver=kvm2  --container-runtime=crio: (46.400462364s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-591922 --driver=kvm2  --container-runtime=crio
E1212 23:17:45.323262  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
E1212 23:17:45.328516  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
E1212 23:17:45.338785  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
E1212 23:17:45.359103  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
E1212 23:17:45.399452  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
E1212 23:17:45.479928  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
E1212 23:17:45.640359  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
E1212 23:17:45.960963  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
E1212 23:17:46.602030  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
E1212 23:17:47.882733  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
E1212 23:17:50.443284  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
E1212 23:17:55.696241  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
E1212 23:18:05.936587  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-591922 --driver=kvm2  --container-runtime=crio: (46.723638849s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-589393
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-591922
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-591922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-591922
helpers_test.go:175: Cleaning up "first-589393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-589393
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-589393: (1.000848656s)
--- PASS: TestMinikubeProfile (95.98s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (27.91s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-260411 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1212 23:18:26.417570  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-260411 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.909711925s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.91s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-260411 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-260411 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.42s)
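Editor's note: the mount verification above works by running `mount` inside the guest over `minikube ssh` and looking for a 9p entry, since the host directory is exported to the VM over the 9p protocol. A minimal sketch of the same check from Go, reusing the binary path and profile name shown in the log; this is an illustration, not the helper the test actually calls:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the guest for its mount table; profile name taken from the log above.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-1-260411",
		"ssh", "--", "mount").CombinedOutput()
	if err != nil {
		panic(err)
	}
	// A successful host mount shows up as a "9p" filesystem entry in the guest.
	if !strings.Contains(string(out), "9p") {
		panic("no 9p mount found in guest")
	}
	fmt.Println("9p mount present")
}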

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (27.73s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-275829 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1212 23:19:07.378154  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-275829 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.731113367s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.73s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-275829 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-275829 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-260411 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-275829 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-275829 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.12s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-275829
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-275829: (1.115488814s)
--- PASS: TestMountStart/serial/Stop (1.12s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (25.32s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-275829
E1212 23:19:27.616560  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-275829: (24.315168284s)
--- PASS: TestMountStart/serial/RestartStopped (25.32s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-275829 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-275829 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (114.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-510563 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1212 23:19:55.301869  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1212 23:20:11.805014  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
E1212 23:20:29.298676  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-510563 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.333742372s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.76s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (7.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-510563 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-510563 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-510563 -- rollout status deployment/busybox: (6.198456232s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-510563 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-510563 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-510563 -- exec busybox-5bc68d56bd-4vnmj -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-510563 -- exec busybox-5bc68d56bd-6hjc6 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-510563 -- exec busybox-5bc68d56bd-4vnmj -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-510563 -- exec busybox-5bc68d56bd-6hjc6 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-510563 -- exec busybox-5bc68d56bd-4vnmj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-510563 -- exec busybox-5bc68d56bd-6hjc6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.98s)
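Editor's note: DeployApp2Nodes exercises in-cluster DNS by running nslookup from a busybox pod scheduled on each node, for the three names shown above. A minimal sketch of the same probe, assuming the pod names from the log and plain kubectl rather than the `minikube kubectl` wrapper the test uses:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-5bc68d56bd-4vnmj", "busybox-5bc68d56bd-6hjc6"} // names from the log above
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, host := range names {
			// Resolution must succeed from every pod, regardless of which node it runs on.
			cmd := exec.Command("kubectl", "--context", "multinode-510563",
				"exec", pod, "--", "nslookup", host)
			if out, err := cmd.CombinedOutput(); err != nil {
				panic(fmt.Sprintf("nslookup %s from %s failed: %v\n%s", host, pod, err, out))
			}
		}
	}
	fmt.Println("DNS resolves from all pods")
}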

                                                
                                    
x
+
TestMultiNode/serial/AddNode (45.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-510563 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-510563 -v 3 --alsologtostderr: (44.58797938s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.19s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-510563 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 cp testdata/cp-test.txt multinode-510563:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 ssh -n multinode-510563 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 cp multinode-510563:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1537792593/001/cp-test_multinode-510563.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 ssh -n multinode-510563 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 cp multinode-510563:/home/docker/cp-test.txt multinode-510563-m02:/home/docker/cp-test_multinode-510563_multinode-510563-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 ssh -n multinode-510563 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 ssh -n multinode-510563-m02 "sudo cat /home/docker/cp-test_multinode-510563_multinode-510563-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 cp multinode-510563:/home/docker/cp-test.txt multinode-510563-m03:/home/docker/cp-test_multinode-510563_multinode-510563-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 ssh -n multinode-510563 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 ssh -n multinode-510563-m03 "sudo cat /home/docker/cp-test_multinode-510563_multinode-510563-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 cp testdata/cp-test.txt multinode-510563-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 ssh -n multinode-510563-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 cp multinode-510563-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1537792593/001/cp-test_multinode-510563-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 ssh -n multinode-510563-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 cp multinode-510563-m02:/home/docker/cp-test.txt multinode-510563:/home/docker/cp-test_multinode-510563-m02_multinode-510563.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 ssh -n multinode-510563-m02 "sudo cat /home/docker/cp-test.txt"
E1212 23:22:45.321210  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 ssh -n multinode-510563 "sudo cat /home/docker/cp-test_multinode-510563-m02_multinode-510563.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 cp multinode-510563-m02:/home/docker/cp-test.txt multinode-510563-m03:/home/docker/cp-test_multinode-510563-m02_multinode-510563-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 ssh -n multinode-510563-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 ssh -n multinode-510563-m03 "sudo cat /home/docker/cp-test_multinode-510563-m02_multinode-510563-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 cp testdata/cp-test.txt multinode-510563-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 ssh -n multinode-510563-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 cp multinode-510563-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1537792593/001/cp-test_multinode-510563-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 ssh -n multinode-510563-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 cp multinode-510563-m03:/home/docker/cp-test.txt multinode-510563:/home/docker/cp-test_multinode-510563-m03_multinode-510563.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 ssh -n multinode-510563-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 ssh -n multinode-510563 "sudo cat /home/docker/cp-test_multinode-510563-m03_multinode-510563.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 cp multinode-510563-m03:/home/docker/cp-test.txt multinode-510563-m02:/home/docker/cp-test_multinode-510563-m03_multinode-510563-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 ssh -n multinode-510563-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 ssh -n multinode-510563-m02 "sudo cat /home/docker/cp-test_multinode-510563-m03_multinode-510563-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.70s)
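Editor's note: every CopyFile step above follows one pattern: copy a file onto a node with `minikube cp`, then read it back with `minikube ssh -n <node> "sudo cat ..."` and compare. A minimal sketch of that round trip for a single node, with the binary, profile and paths taken from the log; illustrative only, not the test's helper:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const bin = "out/minikube-linux-amd64"
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	// Copy the file into the node...
	if out, err := exec.Command(bin, "-p", "multinode-510563", "cp",
		"testdata/cp-test.txt", "multinode-510563:/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("cp failed: %v\n%s", err, out))
	}
	// ...then read it back over ssh and compare byte-for-byte.
	got, err := exec.Command(bin, "-p", "multinode-510563", "ssh", "-n", "multinode-510563",
		"sudo cat /home/docker/cp-test.txt").CombinedOutput()
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		panic("copied file does not match source")
	}
	fmt.Println("round trip ok")
}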

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-510563 node stop m03: (2.097670236s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-510563 status: exit status 7 (450.278403ms)

                                                
                                                
-- stdout --
	multinode-510563
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-510563-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-510563-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-510563 status --alsologtostderr: exit status 7 (446.03074ms)

                                                
                                                
-- stdout --
	multinode-510563
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-510563-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-510563-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 23:22:51.370470  159458 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:22:51.370739  159458 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:22:51.370751  159458 out.go:309] Setting ErrFile to fd 2...
	I1212 23:22:51.370756  159458 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:22:51.370950  159458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1212 23:22:51.371125  159458 out.go:303] Setting JSON to false
	I1212 23:22:51.371165  159458 mustload.go:65] Loading cluster: multinode-510563
	I1212 23:22:51.371321  159458 notify.go:220] Checking for updates...
	I1212 23:22:51.371685  159458 config.go:182] Loaded profile config "multinode-510563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:22:51.371706  159458 status.go:255] checking status of multinode-510563 ...
	I1212 23:22:51.372235  159458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:22:51.372309  159458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:22:51.387962  159458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34951
	I1212 23:22:51.388369  159458 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:22:51.389011  159458 main.go:141] libmachine: Using API Version  1
	I1212 23:22:51.389031  159458 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:22:51.389377  159458 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:22:51.389575  159458 main.go:141] libmachine: (multinode-510563) Calling .GetState
	I1212 23:22:51.391263  159458 status.go:330] multinode-510563 host status = "Running" (err=<nil>)
	I1212 23:22:51.391282  159458 host.go:66] Checking if "multinode-510563" exists ...
	I1212 23:22:51.391578  159458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:22:51.391625  159458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:22:51.406961  159458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37471
	I1212 23:22:51.407312  159458 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:22:51.407740  159458 main.go:141] libmachine: Using API Version  1
	I1212 23:22:51.407759  159458 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:22:51.408094  159458 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:22:51.408301  159458 main.go:141] libmachine: (multinode-510563) Calling .GetIP
	I1212 23:22:51.410873  159458 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:22:51.411236  159458 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:22:51.411279  159458 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:22:51.411357  159458 host.go:66] Checking if "multinode-510563" exists ...
	I1212 23:22:51.411666  159458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:22:51.411719  159458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:22:51.425696  159458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46727
	I1212 23:22:51.426119  159458 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:22:51.426546  159458 main.go:141] libmachine: Using API Version  1
	I1212 23:22:51.426564  159458 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:22:51.426922  159458 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:22:51.427078  159458 main.go:141] libmachine: (multinode-510563) Calling .DriverName
	I1212 23:22:51.427264  159458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 23:22:51.427288  159458 main.go:141] libmachine: (multinode-510563) Calling .GetSSHHostname
	I1212 23:22:51.429931  159458 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:22:51.430309  159458 main.go:141] libmachine: (multinode-510563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:9f:26", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:20:05 +0000 UTC Type:0 Mac:52:54:00:2d:9f:26 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-510563 Clientid:01:52:54:00:2d:9f:26}
	I1212 23:22:51.430332  159458 main.go:141] libmachine: (multinode-510563) DBG | domain multinode-510563 has defined IP address 192.168.39.38 and MAC address 52:54:00:2d:9f:26 in network mk-multinode-510563
	I1212 23:22:51.430503  159458 main.go:141] libmachine: (multinode-510563) Calling .GetSSHPort
	I1212 23:22:51.430651  159458 main.go:141] libmachine: (multinode-510563) Calling .GetSSHKeyPath
	I1212 23:22:51.430773  159458 main.go:141] libmachine: (multinode-510563) Calling .GetSSHUsername
	I1212 23:22:51.430923  159458 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563/id_rsa Username:docker}
	I1212 23:22:51.521475  159458 ssh_runner.go:195] Run: systemctl --version
	I1212 23:22:51.527154  159458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:22:51.542522  159458 kubeconfig.go:92] found "multinode-510563" server: "https://192.168.39.38:8443"
	I1212 23:22:51.542549  159458 api_server.go:166] Checking apiserver status ...
	I1212 23:22:51.542592  159458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:22:51.556470  159458 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup
	I1212 23:22:51.567956  159458 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod4b970951c1b4ca2bc525afa7c2eb2fef/crio-0fe05b10bfcd6f6175b47556313838815da9a96a03c510f0440e507fb82c5f96"
	I1212 23:22:51.568009  159458 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4b970951c1b4ca2bc525afa7c2eb2fef/crio-0fe05b10bfcd6f6175b47556313838815da9a96a03c510f0440e507fb82c5f96/freezer.state
	I1212 23:22:51.579351  159458 api_server.go:204] freezer state: "THAWED"
	I1212 23:22:51.579377  159458 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I1212 23:22:51.584935  159458 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I1212 23:22:51.584961  159458 status.go:421] multinode-510563 apiserver status = Running (err=<nil>)
	I1212 23:22:51.584974  159458 status.go:257] multinode-510563 status: &{Name:multinode-510563 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 23:22:51.585005  159458 status.go:255] checking status of multinode-510563-m02 ...
	I1212 23:22:51.585407  159458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:22:51.585476  159458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:22:51.600540  159458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44375
	I1212 23:22:51.600934  159458 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:22:51.601423  159458 main.go:141] libmachine: Using API Version  1
	I1212 23:22:51.601451  159458 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:22:51.601766  159458 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:22:51.601971  159458 main.go:141] libmachine: (multinode-510563-m02) Calling .GetState
	I1212 23:22:51.603408  159458 status.go:330] multinode-510563-m02 host status = "Running" (err=<nil>)
	I1212 23:22:51.603424  159458 host.go:66] Checking if "multinode-510563-m02" exists ...
	I1212 23:22:51.603712  159458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:22:51.603749  159458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:22:51.618185  159458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36157
	I1212 23:22:51.618572  159458 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:22:51.619085  159458 main.go:141] libmachine: Using API Version  1
	I1212 23:22:51.619109  159458 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:22:51.619375  159458 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:22:51.619562  159458 main.go:141] libmachine: (multinode-510563-m02) Calling .GetIP
	I1212 23:22:51.622311  159458 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:22:51.622741  159458 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:22:51.622779  159458 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:22:51.622983  159458 host.go:66] Checking if "multinode-510563-m02" exists ...
	I1212 23:22:51.623309  159458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:22:51.623355  159458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:22:51.637666  159458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44471
	I1212 23:22:51.638029  159458 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:22:51.638434  159458 main.go:141] libmachine: Using API Version  1
	I1212 23:22:51.638457  159458 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:22:51.638724  159458 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:22:51.638865  159458 main.go:141] libmachine: (multinode-510563-m02) Calling .DriverName
	I1212 23:22:51.639017  159458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 23:22:51.639040  159458 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHHostname
	I1212 23:22:51.641474  159458 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:22:51.641866  159458 main.go:141] libmachine: (multinode-510563-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:30:41", ip: ""} in network mk-multinode-510563: {Iface:virbr1 ExpiryTime:2023-12-13 00:21:15 +0000 UTC Type:0 Mac:52:54:00:e2:30:41 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-510563-m02 Clientid:01:52:54:00:e2:30:41}
	I1212 23:22:51.641906  159458 main.go:141] libmachine: (multinode-510563-m02) DBG | domain multinode-510563-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:e2:30:41 in network mk-multinode-510563
	I1212 23:22:51.642042  159458 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHPort
	I1212 23:22:51.642197  159458 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHKeyPath
	I1212 23:22:51.642318  159458 main.go:141] libmachine: (multinode-510563-m02) Calling .GetSSHUsername
	I1212 23:22:51.642417  159458 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17777-136241/.minikube/machines/multinode-510563-m02/id_rsa Username:docker}
	I1212 23:22:51.727796  159458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:22:51.740148  159458 status.go:257] multinode-510563-m02 status: &{Name:multinode-510563-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1212 23:22:51.740185  159458 status.go:255] checking status of multinode-510563-m03 ...
	I1212 23:22:51.740541  159458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:22:51.740585  159458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:22:51.755255  159458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42401
	I1212 23:22:51.755612  159458 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:22:51.756034  159458 main.go:141] libmachine: Using API Version  1
	I1212 23:22:51.756056  159458 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:22:51.756426  159458 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:22:51.756732  159458 main.go:141] libmachine: (multinode-510563-m03) Calling .GetState
	I1212 23:22:51.758186  159458 status.go:330] multinode-510563-m03 host status = "Stopped" (err=<nil>)
	I1212 23:22:51.758199  159458 status.go:343] host is not running, skipping remaining checks
	I1212 23:22:51.758207  159458 status.go:257] multinode-510563-m03 status: &{Name:multinode-510563-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.99s)
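Editor's note: as the StopNode output shows, `minikube status` exits with status 7 once any host is stopped while still printing the per-node report on stdout. A minimal sketch of handling that convention from Go, assuming only what the log shows (non-zero exit 7 plus usable stdout):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-510563", "status")
	out, err := cmd.Output() // stdout is still captured when the command exits non-zero
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Exit code 7 indicates at least one stopped host; the report explains which.
		fmt.Printf("some nodes are stopped:\n%s", out)
	default:
		panic(err)
	}
}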

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (31.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 node start m03 --alsologtostderr
E1212 23:23:13.139011  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-510563 node start m03 --alsologtostderr: (31.192199525s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.83s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (1.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-510563 node delete m03: (1.2460468s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.80s)
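Editor's note: the `kubectl get nodes -o go-template` check above verifies that every remaining node reports a Ready condition after the delete. The same readiness check can be done programmatically; a minimal sketch with client-go, assuming a kubeconfig at the default location (this swaps in client-go purely for illustration, the test itself relies on the go-template):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print the Ready condition for every node, mirroring the go-template above.
	for _, node := range nodes.Items {
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				fmt.Printf("%s Ready=%s\n", node.Name, cond.Status)
			}
		}
	}
}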

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (454.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-510563 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1212 23:37:45.323181  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
E1212 23:39:27.617309  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1212 23:40:11.805274  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
E1212 23:42:45.320621  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
E1212 23:43:14.852743  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
E1212 23:44:27.616610  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-510563 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m34.011786939s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-510563 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (454.56s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (48.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-510563
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-510563-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-510563-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (75.969891ms)

                                                
                                                
-- stdout --
	* [multinode-510563-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17777
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-510563-m02' is duplicated with machine name 'multinode-510563-m02' in profile 'multinode-510563'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-510563-m03 --driver=kvm2  --container-runtime=crio
E1212 23:45:11.804584  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-510563-m03 --driver=kvm2  --container-runtime=crio: (47.65474718s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-510563
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-510563: exit status 80 (232.639436ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-510563
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-510563-m03 already exists in multinode-510563-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-510563-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.99s)
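Editor's note: ValidateNameConflict exercises two guards shown above: a new profile may not reuse a machine name already owned by another profile's nodes (exit 14), and `node add` refuses a node whose generated name collides with an existing profile (exit 80). A minimal sketch of the first check; the helper and its naming scheme (<profile>, <profile>-m02, ...) are inferred from the log and are illustrative, not minikube's implementation:

package main

import "fmt"

// machineNames lists the node machine names owned by an existing profile.
func machineNames(profile string, nodes int) []string {
	names := []string{profile}
	for i := 2; i <= nodes; i++ {
		names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
	}
	return names
}

// validateProfileName rejects a new profile whose name collides with any
// machine name belonging to an existing profile.
func validateProfileName(newProfile string, existing map[string]int) error {
	for profile, nodes := range existing {
		for _, m := range machineNames(profile, nodes) {
			if m == newProfile {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
					newProfile, m, profile)
			}
		}
	}
	return nil
}

func main() {
	existing := map[string]int{"multinode-510563": 2} // two nodes left after the m03 delete above
	fmt.Println(validateProfileName("multinode-510563-m02", existing)) // rejected, as in the log
	fmt.Println(validateProfileName("multinode-510563-m03", existing)) // allowed: that machine no longer exists
}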

                                                
                                    
x
+
TestScheduledStopUnix (118.29s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-388290 --memory=2048 --driver=kvm2  --container-runtime=crio
E1212 23:50:11.804637  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
E1212 23:50:48.500633  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-388290 --memory=2048 --driver=kvm2  --container-runtime=crio: (46.541785893s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-388290 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-388290 -n scheduled-stop-388290
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-388290 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-388290 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-388290 -n scheduled-stop-388290
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-388290
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-388290 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-388290
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-388290: exit status 7 (75.252608ms)

                                                
                                                
-- stdout --
	scheduled-stop-388290
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-388290 -n scheduled-stop-388290
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-388290 -n scheduled-stop-388290: exit status 7 (74.689418ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-388290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-388290
--- PASS: TestScheduledStopUnix (118.29s)
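Editor's note: the scheduled-stop flow above arms a delayed stop (`--schedule 5m`, then `--schedule 15s`), cancels it with `--cancel-scheduled`, and re-arming kills the previous scheduler, which is why the log records "os: process already finished". A minimal in-process sketch of that arm/replace/cancel pattern; it illustrates the idea only, minikube actually spawns a background process for the scheduled stop:

package main

import (
	"fmt"
	"time"
)

// scheduler holds a single pending stop; re-arming replaces the previous timer.
type scheduler struct{ timer *time.Timer }

func (s *scheduler) schedule(d time.Duration, stop func()) {
	s.cancel() // a new schedule supersedes any pending one
	s.timer = time.AfterFunc(d, stop)
}

func (s *scheduler) cancel() {
	if s.timer != nil {
		s.timer.Stop()
		s.timer = nil
	}
}

func main() {
	var s scheduler
	s.schedule(5*time.Minute, func() { fmt.Println("stopping cluster") })
	s.schedule(2*time.Second, func() { fmt.Println("stopping cluster") }) // replaces the 5m schedule
	time.Sleep(3 * time.Second)                                           // the 2s stop fires here
	s.cancel()                                                            // no-op: nothing is pending anymore
}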

                                                
                                    
x
+
TestKubernetesUpgrade (190.81s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-961264 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-961264 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m56.767567902s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-961264
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-961264: (2.137686572s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-961264 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-961264 status --format={{.Host}}: exit status 7 (96.728718ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-961264 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-961264 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.92005439s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-961264 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-961264 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-961264 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (111.22869ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-961264] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17777
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-961264
	    minikube start -p kubernetes-upgrade-961264 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9612642 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-961264 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-961264 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-961264 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (30.594811041s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-961264" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-961264
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-961264: (1.10891341s)
--- PASS: TestKubernetesUpgrade (190.81s)
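Editor's note: the downgrade step above fails by design: once the cluster runs v1.29.0-rc.2, requesting v1.16.0 is rejected with K8S_DOWNGRADE_UNSUPPORTED. A minimal sketch of such a version gate using golang.org/x/mod/semver; the function name and structure are illustrative, minikube's own check may differ:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkVersionChange rejects a requested Kubernetes version that is older
// than the version the existing cluster is already running.
func checkVersionChange(current, requested string) error {
	if !semver.IsValid(current) || !semver.IsValid(requested) {
		return fmt.Errorf("invalid version: %q / %q", current, requested)
	}
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
	}
	return nil
}

func main() {
	fmt.Println(checkVersionChange("v1.29.0-rc.2", "v1.16.0"))      // rejected, as in the log
	fmt.Println(checkVersionChange("v1.16.0", "v1.29.0-rc.2"))      // upgrade allowed
	fmt.Println(checkVersionChange("v1.29.0-rc.2", "v1.29.0-rc.2")) // restart at the same version allowed
}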

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-269833 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-269833 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (100.595687ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-269833] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17777
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
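Editor's note: StartNoK8sWithVersion confirms that `--no-kubernetes` and `--kubernetes-version` are mutually exclusive and produce a usage error (exit 14). A minimal sketch of that kind of flag validation with the standard flag package; minikube uses its own flag handling, this only illustrates the guard:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start the VM without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	// The two flags contradict each other: refuse the combination up front.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // usage-error exit code, matching the test's expectation above
	}
	fmt.Println("flags ok")
}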

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (77.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-269833 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-269833 --driver=kvm2  --container-runtime=crio: (1m17.370026847s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-269833 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (77.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-120988 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-120988 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (122.874825ms)

                                                
                                                
-- stdout --
	* [false-120988] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17777
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1212 23:52:11.264163  167673 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:52:11.264507  167673 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:52:11.264519  167673 out.go:309] Setting ErrFile to fd 2...
	I1212 23:52:11.264526  167673 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:52:11.264734  167673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17777-136241/.minikube/bin
	I1212 23:52:11.265377  167673 out.go:303] Setting JSON to false
	I1212 23:52:11.266397  167673 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9279,"bootTime":1702415852,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 23:52:11.266496  167673 start.go:138] virtualization: kvm guest
	I1212 23:52:11.268863  167673 out.go:177] * [false-120988] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 23:52:11.271662  167673 out.go:177]   - MINIKUBE_LOCATION=17777
	I1212 23:52:11.273252  167673 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:52:11.271703  167673 notify.go:220] Checking for updates...
	I1212 23:52:11.274964  167673 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17777-136241/kubeconfig
	I1212 23:52:11.276484  167673 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17777-136241/.minikube
	I1212 23:52:11.278071  167673 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 23:52:11.279595  167673 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:52:11.281650  167673 config.go:182] Loaded profile config "NoKubernetes-269833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:52:11.281754  167673 config.go:182] Loaded profile config "offline-crio-245437": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:52:11.281826  167673 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:52:11.319363  167673 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 23:52:11.320878  167673 start.go:298] selected driver: kvm2
	I1212 23:52:11.320900  167673 start.go:902] validating driver "kvm2" against <nil>
	I1212 23:52:11.320912  167673 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:52:11.323186  167673 out.go:177] 
	W1212 23:52:11.324640  167673 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1212 23:52:11.326254  167673 out.go:177] 

** /stderr **
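Note: the MK_USAGE failure above is the outcome this test asserts; the crio runtime refuses --cni=false because it needs a CNI plugin. A minimal sketch of the distinction (illustrative profile name, not the test's):

$ # rejected: the crio container runtime requires CNI
$ minikube start -p demo --container-runtime=crio --cni=false --driver=kvm2
$ # accepted: let minikube choose a CNI automatically ...
$ minikube start -p demo --container-runtime=crio --driver=kvm2
$ # ... or name one explicitly, e.g. bridge, kindnet or calico
$ minikube start -p demo --container-runtime=crio --cni=bridge --driver=kvm2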
net_test.go:88: 
----------------------- debugLogs start: false-120988 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-120988

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-120988

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-120988

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-120988

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-120988

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-120988

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-120988

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-120988

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-120988

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-120988

>>> host: /etc/nsswitch.conf:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: /etc/hosts:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: /etc/resolv.conf:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-120988

>>> host: crictl pods:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: crictl containers:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> k8s: describe netcat deployment:
error: context "false-120988" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-120988" does not exist

>>> k8s: netcat logs:
error: context "false-120988" does not exist

>>> k8s: describe coredns deployment:
error: context "false-120988" does not exist

>>> k8s: describe coredns pods:
error: context "false-120988" does not exist

>>> k8s: coredns logs:
error: context "false-120988" does not exist

>>> k8s: describe api server pod(s):
error: context "false-120988" does not exist

>>> k8s: api server logs:
error: context "false-120988" does not exist

>>> host: /etc/cni:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: ip a s:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: ip r s:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: iptables-save:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: iptables table nat:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> k8s: describe kube-proxy daemon set:
error: context "false-120988" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-120988" does not exist

>>> k8s: kube-proxy logs:
error: context "false-120988" does not exist

>>> host: kubelet daemon status:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: kubelet daemon config:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> k8s: kubelet logs:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-120988

>>> host: docker daemon status:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: docker daemon config:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: /etc/docker/daemon.json:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: docker system info:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: cri-docker daemon status:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: cri-docker daemon config:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: cri-dockerd version:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: containerd daemon status:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: containerd daemon config:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: /etc/containerd/config.toml:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: containerd config dump:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: crio daemon status:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: crio daemon config:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: /etc/crio:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

>>> host: crio config:
* Profile "false-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120988"

----------------------- debugLogs end: false-120988 [took: 3.297305053s] --------------------------------
helpers_test.go:175: Cleaning up "false-120988" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-120988
--- PASS: TestNetworkPlugins/group/false (3.57s)

TestNoKubernetes/serial/StartWithStopK8s (16.32s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-269833 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-269833 --no-kubernetes --driver=kvm2  --container-runtime=crio: (14.249339885s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-269833 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-269833 status -o json: exit status 2 (300.771887ms)

-- stdout --
	{"Name":"NoKubernetes-269833","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
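Note: the JSON above is the state the test expects after restarting the existing profile with --no-kubernetes: the VM keeps running while kubelet and the API server stay stopped, and minikube signals that mixed state with a non-zero exit code. The same fields can be checked by hand (sketch; assumes jq is available):

$ minikube -p NoKubernetes-269833 status -o json | jq -r '.Host, .Kubelet'
Running
Stopped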
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-269833
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-269833: (1.766005385s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.32s)

TestNoKubernetes/serial/Start (32.15s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-269833 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-269833 --no-kubernetes --driver=kvm2  --container-runtime=crio: (32.154599524s)
--- PASS: TestNoKubernetes/serial/Start (32.15s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-269833 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-269833 "sudo systemctl is-active --quiet service kubelet": exit status 1 (221.510637ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
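Note: the non-zero exit is exactly what this assertion wants; with Kubernetes disabled the kubelet unit should not be active, and systemctl reports that through its exit status, which the ssh wrapper surfaces as "Process exited with status 3". A sketch of the same probe without --quiet so the state is printed (the expected output shown is an assumption based on the passing check above):

$ minikube ssh -p NoKubernetes-269833 "sudo systemctl is-active kubelet"
inactive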
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

TestNoKubernetes/serial/ProfileList (0.45s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.45s)

TestNoKubernetes/serial/Stop (1.19s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-269833
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-269833: (1.185824725s)
--- PASS: TestNoKubernetes/serial/Stop (1.19s)

TestNoKubernetes/serial/StartNoArgs (78.43s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-269833 --driver=kvm2  --container-runtime=crio
E1212 23:54:27.616648  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-269833 --driver=kvm2  --container-runtime=crio: (1m18.434369743s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (78.43s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-269833 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-269833 "sudo systemctl is-active --quiet service kubelet": exit status 1 (215.389895ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

TestPause/serial/Start (128.34s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-042245 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-042245 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m8.340537557s)
--- PASS: TestPause/serial/Start (128.34s)

TestStoppedBinaryUpgrade/Setup (2.18s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.18s)

TestStartStop/group/old-k8s-version/serial/FirstStart (122.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-508612 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-508612 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m2.556403391s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (122.56s)

TestStartStop/group/no-preload/serial/FirstStart (169.59s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-143586 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-143586 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (2m49.586699456s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (169.59s)

TestStartStop/group/embed-certs/serial/FirstStart (146.36s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-335807 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E1212 23:59:27.617472  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1212 23:59:54.853282  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-335807 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (2m26.363018315s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (146.36s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.43s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-884273
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.43s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-508612 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c407650a-dd6e-4582-bff0-c017ff268caa] Pending
helpers_test.go:344: "busybox" [c407650a-dd6e-4582-bff0-c017ff268caa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c407650a-dd6e-4582-bff0-c017ff268caa] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.034186201s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-508612 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.99s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.74s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-743278 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-743278 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m2.741925529s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.74s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-508612 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-508612 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.061292981s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-508612 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/embed-certs/serial/DeployApp (12.6s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-335807 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2ec0db89-cddf-47d4-85ec-dc14135752fb] Pending
helpers_test.go:344: "busybox" [2ec0db89-cddf-47d4-85ec-dc14135752fb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2ec0db89-cddf-47d4-85ec-dc14135752fb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.796691749s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-335807 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.60s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-335807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-335807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.165024754s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-335807 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/no-preload/serial/DeployApp (12.92s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-143586 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0c0bfa44-32f5-4aa7-aca2-55232d650fa5] Pending
helpers_test.go:344: "busybox" [0c0bfa44-32f5-4aa7-aca2-55232d650fa5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0c0bfa44-32f5-4aa7-aca2-55232d650fa5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.036909391s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-143586 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.92s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-743278 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3227111a-055e-48bc-abe1-5162c09b58da] Pending
helpers_test.go:344: "busybox" [3227111a-055e-48bc-abe1-5162c09b58da] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3227111a-055e-48bc-abe1-5162c09b58da] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.030367745s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-743278 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.42s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-143586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-143586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.029717595s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-143586 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-743278 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-743278 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.046583243s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-743278 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/old-k8s-version/serial/SecondStart (793.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-508612 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-508612 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (13m12.894676826s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-508612 -n old-k8s-version-508612
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (793.19s)

TestStartStop/group/embed-certs/serial/SecondStart (593.4s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-335807 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-335807 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (9m53.122788454s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-335807 -n embed-certs-335807
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (593.40s)

TestStartStop/group/no-preload/serial/SecondStart (611.08s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-143586 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-143586 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (10m10.808394948s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-143586 -n no-preload-143586
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (611.08s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (574.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-743278 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E1213 00:05:11.805048  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
E1213 00:07:28.501718  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
E1213 00:07:45.321725  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
E1213 00:09:27.617586  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
E1213 00:10:11.805246  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
E1213 00:12:45.320771  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-743278 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (9m34.104384813s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-743278 -n default-k8s-diff-port-743278
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (574.42s)

TestStartStop/group/newest-cni/serial/FirstStart (60.06s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-628189 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-628189 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m0.063657077s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.06s)

TestNetworkPlugins/group/auto/Start (125.17s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-120988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-120988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (2m5.167264021s)
--- PASS: TestNetworkPlugins/group/auto/Start (125.17s)

TestNetworkPlugins/group/kindnet/Start (110s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-120988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1213 00:29:27.617106  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/functional-579382/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-120988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m49.998224s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (110.00s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.92s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-628189 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-628189 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.918713451s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
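Note: the warning above appears to be why the DeployApp step for this profile ran as a 0.00s no-op: the cluster is started with --network-plugin=cni but no CNI is installed, so regular pods cannot be scheduled until one is added. A hedged sketch of what would unblock scheduling, reusing flags that appear elsewhere in this run (the manifest path is a placeholder):

$ # start the profile with a concrete CNI ...
$ minikube start -p newest-cni-628189 --network-plugin=cni --cni=kindnet --driver=kvm2 --container-runtime=crio
$ # ... or apply a CNI manifest into the running cluster by hand
$ kubectl --context newest-cni-628189 apply -f <cni-manifest.yaml>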
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.92s)

TestStartStop/group/newest-cni/serial/Stop (2.41s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-628189 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-628189 --alsologtostderr -v=3: (2.414393941s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.41s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-628189 -n newest-cni-628189
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-628189 -n newest-cni-628189: exit status 7 (101.690269ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-628189 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/newest-cni/serial/SecondStart (66.21s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-628189 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E1213 00:30:11.804977  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
E1213 00:30:51.615463  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/client.crt: no such file or directory
E1213 00:30:51.621366  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/client.crt: no such file or directory
E1213 00:30:51.632000  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/client.crt: no such file or directory
E1213 00:30:51.652309  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/client.crt: no such file or directory
E1213 00:30:51.692880  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-628189 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m5.820720323s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-628189 -n newest-cni-628189
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (66.21s)
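The second start reruns the original flag set and then confirms the host is back up. A rough reproduction, copying the flags exactly as logged above (the line continuations are only for readability):

  out/minikube-linux-amd64 start -p newest-cni-628189 --memory=2200 --alsologtostderr \
    --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
    --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
    --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-628189 -n newest-cni-628189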

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-gj5qk" [cd5e2271-699a-42c0-a25c-d12e7ec0aaa0] Running
E1213 00:30:51.773197  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/client.crt: no such file or directory
E1213 00:30:51.934281  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/client.crt: no such file or directory
E1213 00:30:52.254697  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.025266375s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)
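The controller-pod step polls for a Running pod labelled app=kindnet in kube-system. An equivalent one-liner, with kubectl wait standing in for the test's own polling helper (an illustrative substitute, not what the test runs):

  kubectl --context kindnet-120988 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=600s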

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (100.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-120988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-120988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m40.536752917s)
--- PASS: TestNetworkPlugins/group/calico/Start (100.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-120988 "pgrep -a kubelet"
E1213 00:30:56.737697  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (13.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-120988 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-m6jbg" [28b20a54-e053-4039-8c1f-3a5458d17a68] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-m6jbg" [28b20a54-e053-4039-8c1f-3a5458d17a68] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.010557927s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.43s)
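Each NetCatPod step force-replaces the netcat deployment and then waits up to 15 minutes for the app=netcat pod to become Ready. Roughly, with kubectl wait again standing in for the test's polling helper:

  kubectl --context kindnet-120988 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context kindnet-120988 wait --for=condition=Ready pod -l app=netcat --timeout=900s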

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-120988 "pgrep -a kubelet"
E1213 00:31:01.858245  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (13.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-120988 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-g8kfr" [f8cdc1c1-4b06-42c5-a1ba-02ec69737993] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-g8kfr" [f8cdc1c1-4b06-42c5-a1ba-02ec69737993] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.013028844s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.57s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-628189 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)
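The image check simply lists the images present in the profile and reports anything outside the expected Kubernetes set (here kindest/kindnetd). The listing command, as logged:

  out/minikube-linux-amd64 -p newest-cni-628189 image list --format=json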

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-628189 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-628189 -n newest-cni-628189
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-628189 -n newest-cni-628189: exit status 2 (320.409805ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-628189 -n newest-cni-628189
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-628189 -n newest-cni-628189: exit status 2 (299.92544ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-628189 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-628189 -n newest-cni-628189
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-628189 -n newest-cni-628189
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.18s)
E1213 00:33:35.459856  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/client.crt: no such file or directory
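The pause cycle is: pause the profile, confirm the API server reports Paused and the kubelet reports Stopped (status exits 2 while paused, which the test tolerates), then unpause and re-check. A sketch reusing the logged commands; the "|| true" tolerance of exit 2 is an assumption of the sketch, not part of the test:

  out/minikube-linux-amd64 pause -p newest-cni-628189 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-628189 -n newest-cni-628189 || true
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-628189 -n newest-cni-628189 || true
  out/minikube-linux-amd64 unpause -p newest-cni-628189 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-628189 -n newest-cni-628189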

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (107.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-120988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-120988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m47.36281264s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (107.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-120988 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-120988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-120988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)
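DNS, Localhost and HairPin form a fixed trio run against the netcat deployment for each network plugin: cluster DNS resolution, a loopback connection inside the pod, and a hairpin connection back to the pod through its own "netcat" service. The three probes, exactly as logged:

  kubectl --context kindnet-120988 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context kindnet-120988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context kindnet-120988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"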

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-120988 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-120988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-120988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (127.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-120988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1213 00:31:32.579346  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-120988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m7.618753801s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (127.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (131.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-120988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1213 00:31:43.697661  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/client.crt: no such file or directory
E1213 00:31:43.702978  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/client.crt: no such file or directory
E1213 00:31:43.713231  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/client.crt: no such file or directory
E1213 00:31:43.733536  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/client.crt: no such file or directory
E1213 00:31:43.773870  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/client.crt: no such file or directory
E1213 00:31:43.854274  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/client.crt: no such file or directory
E1213 00:31:44.014746  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/client.crt: no such file or directory
E1213 00:31:44.335384  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/client.crt: no such file or directory
E1213 00:31:44.975576  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/client.crt: no such file or directory
E1213 00:31:46.256454  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/client.crt: no such file or directory
E1213 00:31:48.817012  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/client.crt: no such file or directory
E1213 00:31:53.937949  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/client.crt: no such file or directory
E1213 00:31:55.482581  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/client.crt: no such file or directory
E1213 00:31:55.657882  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/client.crt: no such file or directory
E1213 00:31:55.668188  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/client.crt: no such file or directory
E1213 00:31:55.688375  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/client.crt: no such file or directory
E1213 00:31:55.728794  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/client.crt: no such file or directory
E1213 00:31:55.809860  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/client.crt: no such file or directory
E1213 00:31:55.969996  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/client.crt: no such file or directory
E1213 00:31:56.290589  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/client.crt: no such file or directory
E1213 00:31:56.931690  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/client.crt: no such file or directory
E1213 00:31:58.212882  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/client.crt: no such file or directory
E1213 00:32:00.773675  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/client.crt: no such file or directory
E1213 00:32:04.178301  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/client.crt: no such file or directory
E1213 00:32:05.894407  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/client.crt: no such file or directory
E1213 00:32:13.539625  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/old-k8s-version-508612/client.crt: no such file or directory
E1213 00:32:16.135259  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/client.crt: no such file or directory
E1213 00:32:24.659378  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/client.crt: no such file or directory
E1213 00:32:36.616031  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-120988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m11.209532329s)
--- PASS: TestNetworkPlugins/group/flannel/Start (131.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-l5hdt" [328e558c-c6fd-4af1-8716-98a1420eeae0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.023213268s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-120988 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-120988 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-htsgb" [a3a675a6-76a6-4701-8bb5-5138c8facabf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 00:32:45.320488  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/ingress-addon-legacy-401709/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-htsgb" [a3a675a6-76a6-4701-8bb5-5138c8facabf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.012586658s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-120988 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-120988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-120988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-120988 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-120988 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4q7gj" [b09759ac-35fe-4847-a086-4b021319cd8f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4q7gj" [b09759ac-35fe-4847-a086-4b021319cd8f] Running
E1213 00:33:05.620123  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/no-preload-143586/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.012168438s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-120988 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-120988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-120988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (106.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-120988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1213 00:33:14.854291  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
E1213 00:33:17.576582  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/default-k8s-diff-port-743278/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-120988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m46.817391077s)
--- PASS: TestNetworkPlugins/group/bridge/Start (106.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-120988 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-120988 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-brwvm" [2a31ca17-78a2-4219-9e19-49062d01fdbb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-brwvm" [2a31ca17-78a2-4219-9e19-49062d01fdbb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.011957408s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-4qhv7" [bd475428-c8b5-432d-a3bd-0f2621f344c1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.023491919s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-120988 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-120988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-120988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-120988 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-120988 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fklmm" [2ca3b906-1c56-4685-8eab-d07a90af9898] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fklmm" [2ca3b906-1c56-4685-8eab-d07a90af9898] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.009555575s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-120988 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-120988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-120988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-120988 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-120988 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s2zxk" [15ca7ec0-cfba-4647-9421-eaffd7b2963c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-s2zxk" [15ca7ec0-cfba-4647-9421-eaffd7b2963c] Running
E1213 00:35:11.805168  143541 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17777-136241/.minikube/profiles/addons-577685/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.011112718s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-120988 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-120988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-120988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (39/299)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.4/cached-images 0
13 TestDownloadOnly/v1.28.4/binaries 0
14 TestDownloadOnly/v1.28.4/kubectl 0
19 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
20 TestDownloadOnly/v1.29.0-rc.2/binaries 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
25 TestDownloadOnlyKic 0
39 TestAddons/parallel/Olm 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestGvisorAddon 0
158 TestImageBuild 0
191 TestKicCustomNetwork 0
192 TestKicExistingNetwork 0
193 TestKicCustomSubnet 0
194 TestKicStaticIP 0
226 TestChangeNoneUser 0
229 TestScheduledStopWindows 0
231 TestSkaffold 0
233 TestInsufficientStorage 0
237 TestMissingContainerUpgrade 0
245 TestStartStop/group/disable-driver-mounts 0.16
249 TestNetworkPlugins/group/kubenet 3.48
258 TestNetworkPlugins/group/cilium 4.42
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-343019" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-343019
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.48s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-120988 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-120988

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-120988

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-120988

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-120988

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-120988

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-120988

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-120988

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-120988

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-120988

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-120988

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-120988

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-120988" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-120988" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-120988

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120988"

                                                
                                                
----------------------- debugLogs end: kubenet-120988 [took: 3.336758056s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-120988" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-120988
--- SKIP: TestNetworkPlugins/group/kubenet (3.48s)

TestNetworkPlugins/group/cilium (4.42s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-120988 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-120988

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-120988

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-120988

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-120988

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-120988

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-120988

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-120988

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-120988

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-120988

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-120988

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-120988

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-120988" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-120988

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-120988

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-120988

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-120988

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-120988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-120988" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-120988

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-120988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120988"

                                                
                                                
----------------------- debugLogs end: cilium-120988 [took: 4.245996655s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-120988" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-120988
--- SKIP: TestNetworkPlugins/group/cilium (4.42s)
